00:00:00.001 Started by upstream project "autotest-nightly" build number 4133
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3495
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.073 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.074 The recommended git tool is: git
00:00:00.074 using credential 00000000-0000-0000-0000-000000000002
00:00:00.075 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.122 Fetching changes from the remote Git repository
00:00:00.125 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.192 Using shallow fetch with depth 1
00:00:00.192 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.192 > git --version # timeout=10
00:00:00.254 > git --version # 'git version 2.39.2'
00:00:00.254 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.309 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.309 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.929 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.940 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.952 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD)
00:00:04.952 > git config core.sparsecheckout # timeout=10
00:00:04.963 > git read-tree -mu HEAD # timeout=10
00:00:04.978 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5
00:00:04.997 Commit message: "packer: Merge irdmafedora into main fedora image"
00:00:04.997 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10
00:00:05.116 [Pipeline] Start of Pipeline
00:00:05.130 [Pipeline] library
00:00:05.132 Loading library shm_lib@master
00:00:05.132 Library shm_lib@master is cached. Copying from home.
00:00:05.151 [Pipeline] node
00:00:05.168 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:05.170 [Pipeline] {
00:00:05.182 [Pipeline] catchError
00:00:05.184 [Pipeline] {
00:00:05.195 [Pipeline] wrap
00:00:05.203 [Pipeline] {
00:00:05.209 [Pipeline] stage
00:00:05.210 [Pipeline] { (Prologue)
00:00:05.224 [Pipeline] echo
00:00:05.225 Node: VM-host-WFP7
00:00:05.230 [Pipeline] cleanWs
00:00:05.238 [WS-CLEANUP] Deleting project workspace...
00:00:05.238 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.245 [WS-CLEANUP] done
00:00:05.418 [Pipeline] setCustomBuildProperty
00:00:05.508 [Pipeline] httpRequest
00:00:06.205 [Pipeline] echo
00:00:06.206 Sorcerer 10.211.164.101 is alive
00:00:06.211 [Pipeline] retry
00:00:06.213 [Pipeline] {
00:00:06.223 [Pipeline] httpRequest
00:00:06.228 HttpMethod: GET
00:00:06.228 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:06.230 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:06.236 Response Code: HTTP/1.1 200 OK
00:00:06.237 Success: Status code 200 is in the accepted range: 200,404
00:00:06.237 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:09.415 [Pipeline] }
00:00:09.425 [Pipeline] // retry
00:00:09.432 [Pipeline] sh
00:00:09.712 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:09.727 [Pipeline] httpRequest
00:00:10.105 [Pipeline] echo
00:00:10.107 Sorcerer 10.211.164.101 is alive
00:00:10.115 [Pipeline] retry
00:00:10.116 [Pipeline] {
00:00:10.132 [Pipeline]
httpRequest
00:00:10.137 HttpMethod: GET
00:00:10.138 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:00:10.139 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:00:10.157 Response Code: HTTP/1.1 200 OK
00:00:10.157 Success: Status code 200 is in the accepted range: 200,404
00:00:10.158 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:01:27.246 [Pipeline] }
00:01:27.260 [Pipeline] // retry
00:01:27.265 [Pipeline] sh
00:01:27.548 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:01:30.097 [Pipeline] sh
00:01:30.380 + git -C spdk log --oneline -n5
00:01:30.380 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut
00:01:30.380 a67b3561a dpdk: update submodule to include alarm_cancel fix
00:01:30.380 43f6d3385 nvmf: remove use of STAILQ for last_wqe events
00:01:30.380 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event()
00:01:30.380 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event()
00:01:30.396 [Pipeline] writeFile
00:01:30.410 [Pipeline] sh
00:01:30.694 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:30.707 [Pipeline] sh
00:01:30.992 + cat autorun-spdk.conf
00:01:30.992 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:30.992 SPDK_RUN_ASAN=1
00:01:30.992 SPDK_RUN_UBSAN=1
00:01:30.992 SPDK_TEST_RAID=1
00:01:30.992 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:31.002 RUN_NIGHTLY=1
00:01:31.004 [Pipeline] }
00:01:31.018 [Pipeline] // stage
00:01:31.032 [Pipeline] stage
00:01:31.034 [Pipeline] { (Run VM)
00:01:31.047 [Pipeline] sh
00:01:31.332 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:31.332 + echo 'Start stage prepare_nvme.sh'
00:01:31.332 Start stage prepare_nvme.sh
00:01:31.332 + [[ -n 1 ]]
00:01:31.332 + disk_prefix=ex1
00:01:31.332 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:01:31.332 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:01:31.332 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:01:31.332 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:31.332 ++ SPDK_RUN_ASAN=1
00:01:31.332 ++ SPDK_RUN_UBSAN=1
00:01:31.332 ++ SPDK_TEST_RAID=1
00:01:31.332 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:31.332 ++ RUN_NIGHTLY=1
00:01:31.332 + cd /var/jenkins/workspace/raid-vg-autotest
00:01:31.332 + nvme_files=()
00:01:31.332 + declare -A nvme_files
00:01:31.332 + backend_dir=/var/lib/libvirt/images/backends
00:01:31.332 + nvme_files['nvme.img']=5G
00:01:31.332 + nvme_files['nvme-cmb.img']=5G
00:01:31.332 + nvme_files['nvme-multi0.img']=4G
00:01:31.332 + nvme_files['nvme-multi1.img']=4G
00:01:31.332 + nvme_files['nvme-multi2.img']=4G
00:01:31.332 + nvme_files['nvme-openstack.img']=8G
00:01:31.332 + nvme_files['nvme-zns.img']=5G
00:01:31.332 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:31.332 + (( SPDK_TEST_FTL == 1 ))
00:01:31.332 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:31.332 + [[ !
-d /var/lib/libvirt/images/backends ]]
00:01:31.332 + for nvme in "${!nvme_files[@]}"
00:01:31.332 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:01:31.332 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:31.332 + for nvme in "${!nvme_files[@]}"
00:01:31.332 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:01:31.332 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:31.332 + for nvme in "${!nvme_files[@]}"
00:01:31.332 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:01:31.332 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:31.332 + for nvme in "${!nvme_files[@]}"
00:01:31.332 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:01:31.332 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:31.332 + for nvme in "${!nvme_files[@]}"
00:01:31.332 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:01:31.332 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:31.332 + for nvme in "${!nvme_files[@]}"
00:01:31.332 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:01:31.592 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:31.592 + for nvme in "${!nvme_files[@]}"
00:01:31.592 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:01:31.592 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:31.592 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:01:31.592 + echo 'End stage prepare_nvme.sh'
00:01:31.592 End stage prepare_nvme.sh
00:01:31.605 [Pipeline] sh
00:01:31.909 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:31.910 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:01:31.910
00:01:31.910 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:01:31.910 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:01:31.910 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:01:31.910 HELP=0
00:01:31.910 DRY_RUN=0
00:01:31.910 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:01:31.910 NVME_DISKS_TYPE=nvme,nvme,
00:01:31.910 NVME_AUTO_CREATE=0
00:01:31.910 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:01:31.910 NVME_CMB=,,
00:01:31.910 NVME_PMR=,,
00:01:31.910 NVME_ZNS=,,
00:01:31.910 NVME_MS=,,
00:01:31.910 NVME_FDP=,,
00:01:31.910 SPDK_VAGRANT_DISTRO=fedora39
00:01:31.910 SPDK_VAGRANT_VMCPU=10
00:01:31.910 SPDK_VAGRANT_VMRAM=12288
00:01:31.910 SPDK_VAGRANT_PROVIDER=libvirt
00:01:31.910 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:31.910 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:31.910 SPDK_OPENSTACK_NETWORK=0
00:01:31.910 VAGRANT_PACKAGE_BOX=0
00:01:31.910 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:31.910
FORCE_DISTRO=true
00:01:31.910 VAGRANT_BOX_VERSION=
00:01:31.910 EXTRA_VAGRANTFILES=
00:01:31.910 NIC_MODEL=virtio
00:01:31.910
00:01:31.910 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:01:31.910 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:01:34.461 Bringing machine 'default' up with 'libvirt' provider...
00:01:34.719 ==> default: Creating image (snapshot of base box volume).
00:01:34.978 ==> default: Creating domain with the following settings...
00:01:34.978 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727698726_ef9532f1a122856e94e8
00:01:34.978 ==> default: -- Domain type: kvm
00:01:34.978 ==> default: -- Cpus: 10
00:01:34.978 ==> default: -- Feature: acpi
00:01:34.978 ==> default: -- Feature: apic
00:01:34.979 ==> default: -- Feature: pae
00:01:34.979 ==> default: -- Memory: 12288M
00:01:34.979 ==> default: -- Memory Backing: hugepages:
00:01:34.979 ==> default: -- Management MAC:
00:01:34.979 ==> default: -- Loader:
00:01:34.979 ==> default: -- Nvram:
00:01:34.979 ==> default: -- Base box: spdk/fedora39
00:01:34.979 ==> default: -- Storage pool: default
00:01:34.979 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727698726_ef9532f1a122856e94e8.img (20G)
00:01:34.979 ==> default: -- Volume Cache: default
00:01:34.979 ==> default: -- Kernel:
00:01:34.979 ==> default: -- Initrd:
00:01:34.979 ==> default: -- Graphics Type: vnc
00:01:34.979 ==> default: -- Graphics Port: -1
00:01:34.979 ==> default: -- Graphics IP: 127.0.0.1
00:01:34.979 ==> default: -- Graphics Password: Not defined
00:01:34.979 ==> default: -- Video Type: cirrus
00:01:34.979 ==> default: -- Video VRAM: 9216
00:01:34.979 ==> default: -- Sound Type:
00:01:34.979 ==> default: -- Keymap: en-us
00:01:34.979 ==> default: -- TPM Path:
00:01:34.979 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:34.979 ==> default: -- Command line args:
00:01:34.979 ==> default: -> value=-device,
00:01:34.979 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:34.979 ==> default: -> value=-drive,
00:01:34.979 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:01:34.979 ==> default: -> value=-device,
00:01:34.979 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:34.979 ==> default: -> value=-device,
00:01:34.979 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:34.979 ==> default: -> value=-drive,
00:01:34.979 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:34.979 ==> default: -> value=-device,
00:01:34.979 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:34.979 ==> default: -> value=-drive,
00:01:34.979 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:34.979 ==> default: -> value=-device,
00:01:34.979 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:34.979 ==> default: -> value=-drive,
00:01:34.979 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:34.979 ==> default: -> value=-device,
00:01:34.979 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:34.979 ==> default: Creating shared folders metadata...
00:01:34.979 ==> default: Starting domain.
00:01:36.887 ==> default: Waiting for domain to get an IP address...
00:01:55.003 ==> default: Waiting for SSH to become available...
00:01:56.385 ==> default: Configuring and enabling network interfaces...
00:02:02.965 default: SSH address: 192.168.121.57:22
00:02:02.965 default: SSH username: vagrant
00:02:02.965 default: SSH auth method: private key
00:02:05.503 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:15.509 ==> default: Mounting SSHFS shared folder...
00:02:16.450 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:16.450 ==> default: Checking Mount..
00:02:18.360 ==> default: Folder Successfully Mounted!
00:02:18.360 ==> default: Running provisioner: file...
00:02:19.299 default: ~/.gitconfig => .gitconfig
00:02:19.869
00:02:19.869 SUCCESS!
00:02:19.869
00:02:19.869 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:19.869 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:19.869 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:19.869
00:02:19.879 [Pipeline] }
00:02:19.892 [Pipeline] // stage
00:02:19.900 [Pipeline] dir
00:02:19.901 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:02:19.903 [Pipeline] {
00:02:19.915 [Pipeline] catchError
00:02:19.917 [Pipeline] {
00:02:19.930 [Pipeline] sh
00:02:20.211 + vagrant ssh-config --host vagrant
00:02:20.211 + sed -ne /^Host/,$p
00:02:20.211 + tee ssh_conf
00:02:22.747 Host vagrant
00:02:22.747 HostName 192.168.121.57
00:02:22.747 User vagrant
00:02:22.747 Port 22
00:02:22.747 UserKnownHostsFile /dev/null
00:02:22.747 StrictHostKeyChecking no
00:02:22.747 PasswordAuthentication no
00:02:22.747 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:22.747 IdentitiesOnly yes
00:02:22.747 LogLevel FATAL
00:02:22.747 ForwardAgent yes
00:02:22.747 ForwardX11 yes
00:02:22.747
00:02:22.760 [Pipeline] withEnv
00:02:22.762 [Pipeline] {
00:02:22.774 [Pipeline] sh
00:02:23.055 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:23.055 source /etc/os-release
00:02:23.055 [[ -e /image.version ]] && img=$(< /image.version)
00:02:23.055 # Minimal, systemd-like check.
00:02:23.055 if [[ -e /.dockerenv ]]; then
00:02:23.055 # Clear garbage from the node's name:
00:02:23.055 # agt-er_autotest_547-896 -> autotest_547-896
00:02:23.055 # $HOSTNAME is the actual container id
00:02:23.055 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:23.055 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:23.055 # We can assume this is a mount from a host where container is running,
00:02:23.055 # so fetch its hostname to easily identify the target swarm worker.
00:02:23.055 container="$(< /etc/hostname) ($agent)"
00:02:23.055 else
00:02:23.055 # Fallback
00:02:23.056 container=$agent
00:02:23.056 fi
00:02:23.056 fi
00:02:23.056 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:23.056
00:02:23.328 [Pipeline] }
00:02:23.343 [Pipeline] // withEnv
00:02:23.351 [Pipeline] setCustomBuildProperty
00:02:23.364 [Pipeline] stage
00:02:23.365 [Pipeline] { (Tests)
00:02:23.381 [Pipeline] sh
00:02:23.663 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:23.953 [Pipeline] sh
00:02:24.291 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:24.567 [Pipeline] timeout
00:02:24.568 Timeout set to expire in 1 hr 30 min
00:02:24.569 [Pipeline] {
00:02:24.584 [Pipeline] sh
00:02:24.870 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:25.439 HEAD is now at 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut
00:02:25.451 [Pipeline] sh
00:02:25.778 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:26.053 [Pipeline] sh
00:02:26.338 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:26.615 [Pipeline] sh
00:02:26.899 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:27.158 ++ readlink -f spdk_repo
00:02:27.158 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:27.158 + [[ -n /home/vagrant/spdk_repo ]]
00:02:27.159 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:27.159 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:27.159 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:27.159 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:27.159 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:27.159 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:27.159 + cd /home/vagrant/spdk_repo
00:02:27.159 + source /etc/os-release
00:02:27.159 ++ NAME='Fedora Linux'
00:02:27.159 ++ VERSION='39 (Cloud Edition)'
00:02:27.159 ++ ID=fedora
00:02:27.159 ++ VERSION_ID=39
00:02:27.159 ++ VERSION_CODENAME=
00:02:27.159 ++ PLATFORM_ID=platform:f39
00:02:27.159 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:27.159 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:27.159 ++ LOGO=fedora-logo-icon
00:02:27.159 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:27.159 ++ HOME_URL=https://fedoraproject.org/
00:02:27.159 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:27.159 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:27.159 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:27.159 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:27.159 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:27.159 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:27.159 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:27.159 ++ SUPPORT_END=2024-11-12
00:02:27.159 ++ VARIANT='Cloud Edition'
00:02:27.159 ++ VARIANT_ID=cloud
00:02:27.159 + uname -a
00:02:27.159 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:27.159 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:27.726 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:27.726 Hugepages
00:02:27.726 node hugesize free / total
00:02:27.726 node0 1048576kB 0 / 0
00:02:27.726 node0 2048kB 0 / 0
00:02:27.726
00:02:27.727 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:27.727 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:27.727 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:27.727 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1
nvme1n1 nvme1n2 nvme1n3
00:02:27.727 + rm -f /tmp/spdk-ld-path
00:02:27.727 + source autorun-spdk.conf
00:02:27.727 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:27.727 ++ SPDK_RUN_ASAN=1
00:02:27.727 ++ SPDK_RUN_UBSAN=1
00:02:27.727 ++ SPDK_TEST_RAID=1
00:02:27.727 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:27.727 ++ RUN_NIGHTLY=1
00:02:27.727 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:27.727 + [[ -n '' ]]
00:02:27.727 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:27.727 + for M in /var/spdk/build-*-manifest.txt
00:02:27.727 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:27.727 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:27.727 + for M in /var/spdk/build-*-manifest.txt
00:02:27.727 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:27.727 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:27.986 + for M in /var/spdk/build-*-manifest.txt
00:02:27.986 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:27.986 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:27.986 ++ uname
00:02:27.986 + [[ Linux == \L\i\n\u\x ]]
00:02:27.986 + sudo dmesg -T
00:02:27.986 + sudo dmesg --clear
00:02:27.986 + dmesg_pid=5422
00:02:27.986 + sudo dmesg -Tw
00:02:27.986 + [[ Fedora Linux == FreeBSD ]]
00:02:27.986 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:27.986 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:27.987 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:27.987 + [[ -x /usr/src/fio-static/fio ]]
00:02:27.987 + export FIO_BIN=/usr/src/fio-static/fio
00:02:27.987 + FIO_BIN=/usr/src/fio-static/fio
00:02:27.987 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:27.987 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:27.987 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:27.987 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:27.987 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:27.987 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:27.987 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:27.987 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:27.987 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:27.987 Test configuration:
00:02:27.987 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:27.987 SPDK_RUN_ASAN=1
00:02:27.987 SPDK_RUN_UBSAN=1
00:02:27.987 SPDK_TEST_RAID=1
00:02:27.987 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:27.987 RUN_NIGHTLY=1
12:19:39 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:02:27.987 12:19:39 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:27.987 12:19:39 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:27.987 12:19:39 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:27.987 12:19:39 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:27.987 12:19:39 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:27.987 12:19:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:27.987 12:19:39 -- paths/export.sh@3 -- $
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:27.987 12:19:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:27.987 12:19:39 -- paths/export.sh@5 -- $ export PATH
00:02:27.987 12:19:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:28.246 12:19:39 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:28.246 12:19:39 -- common/autobuild_common.sh@479 -- $ date +%s
00:02:28.246 12:19:39 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727698779.XXXXXX
00:02:28.246 12:19:39 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727698779.Bo3xMy
00:02:28.246 12:19:39 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:02:28.246 12:19:39 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']'
00:02:28.246 12:19:39 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:28.246 12:19:39 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:28.246 12:19:39 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:28.246 12:19:39 -- common/autobuild_common.sh@495 -- $ get_config_params
00:02:28.246 12:19:39 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:02:28.246 12:19:39 -- common/autotest_common.sh@10 -- $ set +x
00:02:28.246 12:19:39 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:02:28.246 12:19:39 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:02:28.246 12:19:39 -- pm/common@17 -- $ local monitor
00:02:28.246 12:19:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:28.246 12:19:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:28.246 12:19:39 -- pm/common@25 -- $ sleep 1
00:02:28.246 12:19:39 -- pm/common@21 -- $ date +%s
00:02:28.246 12:19:39 -- pm/common@21 -- $ date +%s
00:02:28.246 12:19:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727698779
00:02:28.246 12:19:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727698779
00:02:28.246 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727698779_collect-cpu-load.pm.log
00:02:28.246 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727698779_collect-vmstat.pm.log
00:02:29.185 12:19:40 -- common/autobuild_common.sh@498 --
$ trap stop_monitor_resources EXIT
00:02:29.185 12:19:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:29.185 12:19:40 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:29.185 12:19:40 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:29.185 12:19:40 -- spdk/autobuild.sh@16 -- $ date -u
00:02:29.185 Mon Sep 30 12:19:40 PM UTC 2024
00:02:29.185 12:19:40 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:29.185 v25.01-pre-17-g09cc66129
00:02:29.185 12:19:40 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:29.185 12:19:40 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:29.185 12:19:40 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:29.185 12:19:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:29.185 12:19:40 -- common/autotest_common.sh@10 -- $ set +x
00:02:29.185 ************************************
00:02:29.185 START TEST asan
00:02:29.185 ************************************
00:02:29.185 using asan
12:19:40 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:02:29.185
00:02:29.185 real 0m0.001s
00:02:29.185 user 0m0.001s
00:02:29.185 sys 0m0.000s
00:02:29.185 12:19:40 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:29.185 12:19:40 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:29.185 ************************************
00:02:29.185 END TEST asan
00:02:29.186 ************************************
00:02:29.186 12:19:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:29.186 12:19:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:29.186 12:19:41 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:29.186 12:19:41 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:29.186 12:19:41 -- common/autotest_common.sh@10 -- $ set +x
00:02:29.186 ************************************
00:02:29.186 START TEST ubsan
00:02:29.186 ************************************
00:02:29.186 using ubsan
12:19:41 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:02:29.186
00:02:29.186 real 0m0.000s
00:02:29.186 user 0m0.000s
00:02:29.186 sys 0m0.000s
00:02:29.186 12:19:41 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:29.186 12:19:41 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:29.186 ************************************
00:02:29.186 END TEST ubsan
00:02:29.186 ************************************
00:02:29.446 12:19:41 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:29.446 12:19:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:29.446 12:19:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:29.446 12:19:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:29.446 12:19:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:29.446 12:19:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:29.446 12:19:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:29.446 12:19:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:29.446 12:19:41 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:02:29.446 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:29.446 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:30.014 Using 'verbs' RDMA provider
00:02:45.881 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:03.987 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:03.987 Creating mk/config.mk...done.
00:03:03.987 Creating mk/cc.flags.mk...done.
00:03:03.987 Type 'make' to build.
00:03:03.987 12:20:14 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:03.987 12:20:14 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:03.987 12:20:14 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:03.987 12:20:14 -- common/autotest_common.sh@10 -- $ set +x 00:03:03.987 ************************************ 00:03:03.987 START TEST make 00:03:03.987 ************************************ 00:03:03.987 12:20:14 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:03.987 make[1]: Nothing to be done for 'all'. 00:03:13.984 The Meson build system 00:03:13.984 Version: 1.5.0 00:03:13.984 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:13.984 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:13.984 Build type: native build 00:03:13.984 Program cat found: YES (/usr/bin/cat) 00:03:13.984 Project name: DPDK 00:03:13.984 Project version: 24.03.0 00:03:13.984 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:13.984 C linker for the host machine: cc ld.bfd 2.40-14 00:03:13.984 Host machine cpu family: x86_64 00:03:13.984 Host machine cpu: x86_64 00:03:13.984 Message: ## Building in Developer Mode ## 00:03:13.984 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:13.984 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:13.984 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:13.984 Program python3 found: YES (/usr/bin/python3) 00:03:13.984 Program cat found: YES (/usr/bin/cat) 00:03:13.984 Compiler for C supports arguments -march=native: YES 00:03:13.984 Checking for size of "void *" : 8 00:03:13.984 Checking for size of "void *" : 8 (cached) 00:03:13.984 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:13.984 Library m found: YES 00:03:13.984 Library numa found: YES 00:03:13.984 Has header "numaif.h" : YES 
00:03:13.984 Library fdt found: NO 00:03:13.984 Library execinfo found: NO 00:03:13.984 Has header "execinfo.h" : YES 00:03:13.984 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:13.984 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:13.984 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:13.984 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:13.984 Run-time dependency openssl found: YES 3.1.1 00:03:13.984 Run-time dependency libpcap found: YES 1.10.4 00:03:13.984 Has header "pcap.h" with dependency libpcap: YES 00:03:13.984 Compiler for C supports arguments -Wcast-qual: YES 00:03:13.984 Compiler for C supports arguments -Wdeprecated: YES 00:03:13.984 Compiler for C supports arguments -Wformat: YES 00:03:13.984 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:13.984 Compiler for C supports arguments -Wformat-security: NO 00:03:13.984 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:13.984 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:13.984 Compiler for C supports arguments -Wnested-externs: YES 00:03:13.984 Compiler for C supports arguments -Wold-style-definition: YES 00:03:13.984 Compiler for C supports arguments -Wpointer-arith: YES 00:03:13.984 Compiler for C supports arguments -Wsign-compare: YES 00:03:13.984 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:13.984 Compiler for C supports arguments -Wundef: YES 00:03:13.984 Compiler for C supports arguments -Wwrite-strings: YES 00:03:13.984 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:13.984 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:13.984 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:13.984 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:13.984 Program objdump found: YES (/usr/bin/objdump) 00:03:13.984 Compiler for C supports arguments -mavx512f: YES 00:03:13.984 Checking if "AVX512 
checking" compiles: YES 00:03:13.984 Fetching value of define "__SSE4_2__" : 1 00:03:13.984 Fetching value of define "__AES__" : 1 00:03:13.984 Fetching value of define "__AVX__" : 1 00:03:13.984 Fetching value of define "__AVX2__" : 1 00:03:13.984 Fetching value of define "__AVX512BW__" : 1 00:03:13.984 Fetching value of define "__AVX512CD__" : 1 00:03:13.984 Fetching value of define "__AVX512DQ__" : 1 00:03:13.984 Fetching value of define "__AVX512F__" : 1 00:03:13.984 Fetching value of define "__AVX512VL__" : 1 00:03:13.984 Fetching value of define "__PCLMUL__" : 1 00:03:13.984 Fetching value of define "__RDRND__" : 1 00:03:13.984 Fetching value of define "__RDSEED__" : 1 00:03:13.984 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:13.984 Fetching value of define "__znver1__" : (undefined) 00:03:13.984 Fetching value of define "__znver2__" : (undefined) 00:03:13.984 Fetching value of define "__znver3__" : (undefined) 00:03:13.984 Fetching value of define "__znver4__" : (undefined) 00:03:13.984 Library asan found: YES 00:03:13.984 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:13.984 Message: lib/log: Defining dependency "log" 00:03:13.984 Message: lib/kvargs: Defining dependency "kvargs" 00:03:13.984 Message: lib/telemetry: Defining dependency "telemetry" 00:03:13.984 Library rt found: YES 00:03:13.984 Checking for function "getentropy" : NO 00:03:13.984 Message: lib/eal: Defining dependency "eal" 00:03:13.984 Message: lib/ring: Defining dependency "ring" 00:03:13.984 Message: lib/rcu: Defining dependency "rcu" 00:03:13.984 Message: lib/mempool: Defining dependency "mempool" 00:03:13.984 Message: lib/mbuf: Defining dependency "mbuf" 00:03:13.984 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:13.984 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:13.984 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:13.984 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:13.984 Fetching value of define 
"__AVX512VL__" : 1 (cached) 00:03:13.984 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:13.984 Compiler for C supports arguments -mpclmul: YES 00:03:13.984 Compiler for C supports arguments -maes: YES 00:03:13.984 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:13.985 Compiler for C supports arguments -mavx512bw: YES 00:03:13.985 Compiler for C supports arguments -mavx512dq: YES 00:03:13.985 Compiler for C supports arguments -mavx512vl: YES 00:03:13.985 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:13.985 Compiler for C supports arguments -mavx2: YES 00:03:13.985 Compiler for C supports arguments -mavx: YES 00:03:13.985 Message: lib/net: Defining dependency "net" 00:03:13.985 Message: lib/meter: Defining dependency "meter" 00:03:13.985 Message: lib/ethdev: Defining dependency "ethdev" 00:03:13.985 Message: lib/pci: Defining dependency "pci" 00:03:13.985 Message: lib/cmdline: Defining dependency "cmdline" 00:03:13.985 Message: lib/hash: Defining dependency "hash" 00:03:13.985 Message: lib/timer: Defining dependency "timer" 00:03:13.985 Message: lib/compressdev: Defining dependency "compressdev" 00:03:13.985 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:13.985 Message: lib/dmadev: Defining dependency "dmadev" 00:03:13.985 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:13.985 Message: lib/power: Defining dependency "power" 00:03:13.985 Message: lib/reorder: Defining dependency "reorder" 00:03:13.985 Message: lib/security: Defining dependency "security" 00:03:13.985 Has header "linux/userfaultfd.h" : YES 00:03:13.985 Has header "linux/vduse.h" : YES 00:03:13.985 Message: lib/vhost: Defining dependency "vhost" 00:03:13.985 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:13.985 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:13.985 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:13.985 Message: drivers/mempool/ring: Defining 
dependency "mempool_ring" 00:03:13.985 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:13.985 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:13.985 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:13.985 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:13.985 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:13.985 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:13.985 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:13.985 Configuring doxy-api-html.conf using configuration 00:03:13.985 Configuring doxy-api-man.conf using configuration 00:03:13.985 Program mandb found: YES (/usr/bin/mandb) 00:03:13.985 Program sphinx-build found: NO 00:03:13.985 Configuring rte_build_config.h using configuration 00:03:13.985 Message: 00:03:13.985 ================= 00:03:13.985 Applications Enabled 00:03:13.985 ================= 00:03:13.985 00:03:13.985 apps: 00:03:13.985 00:03:13.985 00:03:13.985 Message: 00:03:13.985 ================= 00:03:13.985 Libraries Enabled 00:03:13.985 ================= 00:03:13.985 00:03:13.985 libs: 00:03:13.985 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:13.985 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:13.985 cryptodev, dmadev, power, reorder, security, vhost, 00:03:13.985 00:03:13.985 Message: 00:03:13.985 =============== 00:03:13.985 Drivers Enabled 00:03:13.985 =============== 00:03:13.985 00:03:13.985 common: 00:03:13.985 00:03:13.985 bus: 00:03:13.985 pci, vdev, 00:03:13.985 mempool: 00:03:13.985 ring, 00:03:13.985 dma: 00:03:13.985 00:03:13.985 net: 00:03:13.985 00:03:13.985 crypto: 00:03:13.985 00:03:13.985 compress: 00:03:13.985 00:03:13.985 vdpa: 00:03:13.985 00:03:13.985 00:03:13.985 Message: 00:03:13.985 ================= 00:03:13.985 Content Skipped 00:03:13.985 ================= 00:03:13.985 00:03:13.985 apps: 
00:03:13.985 dumpcap: explicitly disabled via build config 00:03:13.985 graph: explicitly disabled via build config 00:03:13.985 pdump: explicitly disabled via build config 00:03:13.985 proc-info: explicitly disabled via build config 00:03:13.985 test-acl: explicitly disabled via build config 00:03:13.985 test-bbdev: explicitly disabled via build config 00:03:13.985 test-cmdline: explicitly disabled via build config 00:03:13.985 test-compress-perf: explicitly disabled via build config 00:03:13.985 test-crypto-perf: explicitly disabled via build config 00:03:13.985 test-dma-perf: explicitly disabled via build config 00:03:13.985 test-eventdev: explicitly disabled via build config 00:03:13.985 test-fib: explicitly disabled via build config 00:03:13.985 test-flow-perf: explicitly disabled via build config 00:03:13.985 test-gpudev: explicitly disabled via build config 00:03:13.985 test-mldev: explicitly disabled via build config 00:03:13.985 test-pipeline: explicitly disabled via build config 00:03:13.985 test-pmd: explicitly disabled via build config 00:03:13.985 test-regex: explicitly disabled via build config 00:03:13.985 test-sad: explicitly disabled via build config 00:03:13.985 test-security-perf: explicitly disabled via build config 00:03:13.985 00:03:13.985 libs: 00:03:13.985 argparse: explicitly disabled via build config 00:03:13.985 metrics: explicitly disabled via build config 00:03:13.985 acl: explicitly disabled via build config 00:03:13.985 bbdev: explicitly disabled via build config 00:03:13.985 bitratestats: explicitly disabled via build config 00:03:13.985 bpf: explicitly disabled via build config 00:03:13.985 cfgfile: explicitly disabled via build config 00:03:13.985 distributor: explicitly disabled via build config 00:03:13.985 efd: explicitly disabled via build config 00:03:13.985 eventdev: explicitly disabled via build config 00:03:13.985 dispatcher: explicitly disabled via build config 00:03:13.985 gpudev: explicitly disabled via build config 
00:03:13.985 gro: explicitly disabled via build config 00:03:13.985 gso: explicitly disabled via build config 00:03:13.985 ip_frag: explicitly disabled via build config 00:03:13.985 jobstats: explicitly disabled via build config 00:03:13.985 latencystats: explicitly disabled via build config 00:03:13.985 lpm: explicitly disabled via build config 00:03:13.985 member: explicitly disabled via build config 00:03:13.985 pcapng: explicitly disabled via build config 00:03:13.985 rawdev: explicitly disabled via build config 00:03:13.985 regexdev: explicitly disabled via build config 00:03:13.985 mldev: explicitly disabled via build config 00:03:13.985 rib: explicitly disabled via build config 00:03:13.985 sched: explicitly disabled via build config 00:03:13.985 stack: explicitly disabled via build config 00:03:13.985 ipsec: explicitly disabled via build config 00:03:13.985 pdcp: explicitly disabled via build config 00:03:13.985 fib: explicitly disabled via build config 00:03:13.985 port: explicitly disabled via build config 00:03:13.985 pdump: explicitly disabled via build config 00:03:13.985 table: explicitly disabled via build config 00:03:13.985 pipeline: explicitly disabled via build config 00:03:13.985 graph: explicitly disabled via build config 00:03:13.985 node: explicitly disabled via build config 00:03:13.985 00:03:13.985 drivers: 00:03:13.985 common/cpt: not in enabled drivers build config 00:03:13.985 common/dpaax: not in enabled drivers build config 00:03:13.985 common/iavf: not in enabled drivers build config 00:03:13.985 common/idpf: not in enabled drivers build config 00:03:13.985 common/ionic: not in enabled drivers build config 00:03:13.985 common/mvep: not in enabled drivers build config 00:03:13.985 common/octeontx: not in enabled drivers build config 00:03:13.985 bus/auxiliary: not in enabled drivers build config 00:03:13.985 bus/cdx: not in enabled drivers build config 00:03:13.985 bus/dpaa: not in enabled drivers build config 00:03:13.985 bus/fslmc: 
not in enabled drivers build config 00:03:13.985 bus/ifpga: not in enabled drivers build config 00:03:13.985 bus/platform: not in enabled drivers build config 00:03:13.985 bus/uacce: not in enabled drivers build config 00:03:13.985 bus/vmbus: not in enabled drivers build config 00:03:13.985 common/cnxk: not in enabled drivers build config 00:03:13.985 common/mlx5: not in enabled drivers build config 00:03:13.985 common/nfp: not in enabled drivers build config 00:03:13.985 common/nitrox: not in enabled drivers build config 00:03:13.985 common/qat: not in enabled drivers build config 00:03:13.985 common/sfc_efx: not in enabled drivers build config 00:03:13.985 mempool/bucket: not in enabled drivers build config 00:03:13.985 mempool/cnxk: not in enabled drivers build config 00:03:13.985 mempool/dpaa: not in enabled drivers build config 00:03:13.985 mempool/dpaa2: not in enabled drivers build config 00:03:13.985 mempool/octeontx: not in enabled drivers build config 00:03:13.985 mempool/stack: not in enabled drivers build config 00:03:13.985 dma/cnxk: not in enabled drivers build config 00:03:13.985 dma/dpaa: not in enabled drivers build config 00:03:13.985 dma/dpaa2: not in enabled drivers build config 00:03:13.985 dma/hisilicon: not in enabled drivers build config 00:03:13.985 dma/idxd: not in enabled drivers build config 00:03:13.985 dma/ioat: not in enabled drivers build config 00:03:13.985 dma/skeleton: not in enabled drivers build config 00:03:13.985 net/af_packet: not in enabled drivers build config 00:03:13.985 net/af_xdp: not in enabled drivers build config 00:03:13.985 net/ark: not in enabled drivers build config 00:03:13.985 net/atlantic: not in enabled drivers build config 00:03:13.985 net/avp: not in enabled drivers build config 00:03:13.986 net/axgbe: not in enabled drivers build config 00:03:13.986 net/bnx2x: not in enabled drivers build config 00:03:13.986 net/bnxt: not in enabled drivers build config 00:03:13.986 net/bonding: not in enabled drivers 
build config 00:03:13.986 net/cnxk: not in enabled drivers build config 00:03:13.986 net/cpfl: not in enabled drivers build config 00:03:13.986 net/cxgbe: not in enabled drivers build config 00:03:13.986 net/dpaa: not in enabled drivers build config 00:03:13.986 net/dpaa2: not in enabled drivers build config 00:03:13.986 net/e1000: not in enabled drivers build config 00:03:13.986 net/ena: not in enabled drivers build config 00:03:13.986 net/enetc: not in enabled drivers build config 00:03:13.986 net/enetfec: not in enabled drivers build config 00:03:13.986 net/enic: not in enabled drivers build config 00:03:13.986 net/failsafe: not in enabled drivers build config 00:03:13.986 net/fm10k: not in enabled drivers build config 00:03:13.986 net/gve: not in enabled drivers build config 00:03:13.986 net/hinic: not in enabled drivers build config 00:03:13.986 net/hns3: not in enabled drivers build config 00:03:13.986 net/i40e: not in enabled drivers build config 00:03:13.986 net/iavf: not in enabled drivers build config 00:03:13.986 net/ice: not in enabled drivers build config 00:03:13.986 net/idpf: not in enabled drivers build config 00:03:13.986 net/igc: not in enabled drivers build config 00:03:13.986 net/ionic: not in enabled drivers build config 00:03:13.986 net/ipn3ke: not in enabled drivers build config 00:03:13.986 net/ixgbe: not in enabled drivers build config 00:03:13.986 net/mana: not in enabled drivers build config 00:03:13.986 net/memif: not in enabled drivers build config 00:03:13.986 net/mlx4: not in enabled drivers build config 00:03:13.986 net/mlx5: not in enabled drivers build config 00:03:13.986 net/mvneta: not in enabled drivers build config 00:03:13.986 net/mvpp2: not in enabled drivers build config 00:03:13.986 net/netvsc: not in enabled drivers build config 00:03:13.986 net/nfb: not in enabled drivers build config 00:03:13.986 net/nfp: not in enabled drivers build config 00:03:13.986 net/ngbe: not in enabled drivers build config 00:03:13.986 net/null: 
not in enabled drivers build config 00:03:13.986 net/octeontx: not in enabled drivers build config 00:03:13.986 net/octeon_ep: not in enabled drivers build config 00:03:13.986 net/pcap: not in enabled drivers build config 00:03:13.986 net/pfe: not in enabled drivers build config 00:03:13.986 net/qede: not in enabled drivers build config 00:03:13.986 net/ring: not in enabled drivers build config 00:03:13.986 net/sfc: not in enabled drivers build config 00:03:13.986 net/softnic: not in enabled drivers build config 00:03:13.986 net/tap: not in enabled drivers build config 00:03:13.986 net/thunderx: not in enabled drivers build config 00:03:13.986 net/txgbe: not in enabled drivers build config 00:03:13.986 net/vdev_netvsc: not in enabled drivers build config 00:03:13.986 net/vhost: not in enabled drivers build config 00:03:13.986 net/virtio: not in enabled drivers build config 00:03:13.986 net/vmxnet3: not in enabled drivers build config 00:03:13.986 raw/*: missing internal dependency, "rawdev" 00:03:13.986 crypto/armv8: not in enabled drivers build config 00:03:13.986 crypto/bcmfs: not in enabled drivers build config 00:03:13.986 crypto/caam_jr: not in enabled drivers build config 00:03:13.986 crypto/ccp: not in enabled drivers build config 00:03:13.986 crypto/cnxk: not in enabled drivers build config 00:03:13.986 crypto/dpaa_sec: not in enabled drivers build config 00:03:13.986 crypto/dpaa2_sec: not in enabled drivers build config 00:03:13.986 crypto/ipsec_mb: not in enabled drivers build config 00:03:13.986 crypto/mlx5: not in enabled drivers build config 00:03:13.986 crypto/mvsam: not in enabled drivers build config 00:03:13.986 crypto/nitrox: not in enabled drivers build config 00:03:13.986 crypto/null: not in enabled drivers build config 00:03:13.986 crypto/octeontx: not in enabled drivers build config 00:03:13.986 crypto/openssl: not in enabled drivers build config 00:03:13.986 crypto/scheduler: not in enabled drivers build config 00:03:13.986 crypto/uadk: not 
in enabled drivers build config 00:03:13.986 crypto/virtio: not in enabled drivers build config 00:03:13.986 compress/isal: not in enabled drivers build config 00:03:13.986 compress/mlx5: not in enabled drivers build config 00:03:13.986 compress/nitrox: not in enabled drivers build config 00:03:13.986 compress/octeontx: not in enabled drivers build config 00:03:13.986 compress/zlib: not in enabled drivers build config 00:03:13.986 regex/*: missing internal dependency, "regexdev" 00:03:13.986 ml/*: missing internal dependency, "mldev" 00:03:13.986 vdpa/ifc: not in enabled drivers build config 00:03:13.986 vdpa/mlx5: not in enabled drivers build config 00:03:13.986 vdpa/nfp: not in enabled drivers build config 00:03:13.986 vdpa/sfc: not in enabled drivers build config 00:03:13.986 event/*: missing internal dependency, "eventdev" 00:03:13.986 baseband/*: missing internal dependency, "bbdev" 00:03:13.986 gpu/*: missing internal dependency, "gpudev" 00:03:13.986 00:03:13.986 00:03:13.986 Build targets in project: 85 00:03:13.986 00:03:13.986 DPDK 24.03.0 00:03:13.986 00:03:13.986 User defined options 00:03:13.986 buildtype : debug 00:03:13.986 default_library : shared 00:03:13.986 libdir : lib 00:03:13.986 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:13.986 b_sanitize : address 00:03:13.986 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:13.986 c_link_args : 00:03:13.986 cpu_instruction_set: native 00:03:13.986 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:13.986 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:13.986 enable_docs : false 00:03:13.986 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:13.986 enable_kmods : false 00:03:13.986 max_lcores : 128 00:03:13.986 tests : false 00:03:13.986 00:03:13.986 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:13.986 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:13.986 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:13.986 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:13.986 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:13.986 [4/268] Linking static target lib/librte_kvargs.a 00:03:13.986 [5/268] Linking static target lib/librte_log.a 00:03:13.986 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:13.986 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:13.986 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:13.986 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:13.986 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:13.986 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:13.986 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:13.986 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:13.986 [14/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.986 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:13.986 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 
00:03:13.986 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:13.986 [18/268] Linking static target lib/librte_telemetry.a 00:03:13.986 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:13.986 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.246 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:14.246 [22/268] Linking target lib/librte_log.so.24.1 00:03:14.246 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:14.246 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:14.246 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:14.246 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:14.246 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:14.246 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:14.504 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:14.504 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:14.504 [31/268] Linking target lib/librte_kvargs.so.24.1 00:03:14.504 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:14.504 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.504 [34/268] Linking target lib/librte_telemetry.so.24.1 00:03:14.763 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:14.763 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:14.763 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:14.763 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:14.763 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:14.763 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:14.763 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:14.763 [42/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:14.763 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:14.763 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:15.022 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:15.022 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:15.282 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:15.282 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:15.282 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:15.282 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:15.282 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:15.282 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:15.541 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:15.541 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:15.541 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:15.801 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:15.801 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:15.801 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:15.801 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:15.801 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 
00:03:15.801 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:15.801 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:15.801 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:16.063 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:16.063 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:16.063 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:16.329 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:16.329 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:16.329 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:16.595 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:16.595 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:16.595 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:16.595 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:16.595 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:16.595 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:16.595 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:16.595 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:16.855 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:16.855 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:16.855 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:16.855 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:17.115 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:17.115 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:17.115 
[84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:17.115 [85/268] Linking static target lib/librte_ring.a 00:03:17.379 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:17.380 [87/268] Linking static target lib/librte_eal.a 00:03:17.380 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:17.380 [89/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.380 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:17.380 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:17.380 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:17.380 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:17.380 [94/268] Linking static target lib/librte_mempool.a 00:03:17.643 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:17.643 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:17.643 [97/268] Linking static target lib/librte_rcu.a 00:03:17.902 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:17.902 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:17.902 [100/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:17.903 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:17.903 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:18.162 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:18.162 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:18.162 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:18.162 [106/268] Linking static target lib/librte_net.a 00:03:18.162 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:18.162 [108/268] Linking static target lib/librte_meter.a 
00:03:18.162 [109/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.422 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:18.422 [111/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:18.422 [112/268] Linking static target lib/librte_mbuf.a 00:03:18.422 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:18.422 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:18.682 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.682 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.682 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.682 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:18.941 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:19.201 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:19.201 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:19.201 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:19.201 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:19.461 [124/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.461 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:19.461 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:19.461 [127/268] Linking static target lib/librte_pci.a 00:03:19.722 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:19.722 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:19.722 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:19.722 
[131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:19.722 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:19.722 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:19.722 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:19.982 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:19.982 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:19.982 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:19.982 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.982 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:19.982 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:19.982 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:19.982 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:19.982 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:19.982 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:19.982 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:20.242 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:20.242 [147/268] Linking static target lib/librte_cmdline.a 00:03:20.502 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:20.502 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:20.502 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:20.502 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:20.502 [152/268] Linking static target lib/librte_timer.a 00:03:20.502 [153/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:20.761 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:20.762 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:20.762 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:21.021 [157/268] Linking static target lib/librte_ethdev.a 00:03:21.021 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:21.021 [159/268] Linking static target lib/librte_hash.a 00:03:21.021 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:21.021 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:21.281 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:21.281 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.281 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:21.281 [165/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:21.281 [166/268] Linking static target lib/librte_compressdev.a 00:03:21.281 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:21.281 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:21.281 [169/268] Linking static target lib/librte_dmadev.a 00:03:21.541 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:21.801 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:21.801 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:21.801 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.801 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:22.061 [175/268] Generating lib/hash.sym_chk with a custom command 
(wrapped by meson to capture output) 00:03:22.061 [176/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.322 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:22.322 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:22.322 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:22.322 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:22.322 [181/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.322 [182/268] Linking static target lib/librte_cryptodev.a 00:03:22.322 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:22.322 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:22.582 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:22.582 [186/268] Linking static target lib/librte_power.a 00:03:22.582 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:22.842 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:22.842 [189/268] Linking static target lib/librte_reorder.a 00:03:22.842 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:23.102 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:23.102 [192/268] Linking static target lib/librte_security.a 00:03:23.102 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:23.363 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:23.363 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.624 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.883 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:23.884 [198/268] Compiling 
C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:23.884 [199/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.884 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:23.884 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:24.143 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:24.143 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:24.143 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:24.403 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:24.403 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:24.403 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.662 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:24.662 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:24.662 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:24.662 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:24.922 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:24.922 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:24.922 [214/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:24.922 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:24.922 [216/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:24.922 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:24.922 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:24.922 [219/268] 
Linking static target drivers/librte_bus_vdev.a 00:03:24.922 [220/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:24.922 [221/268] Linking static target drivers/librte_bus_pci.a 00:03:25.182 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:25.182 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:25.182 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:25.182 [225/268] Linking static target drivers/librte_mempool_ring.a 00:03:25.182 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.441 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.010 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:27.941 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.201 [230/268] Linking target lib/librte_eal.so.24.1 00:03:28.201 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:28.201 [232/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:28.201 [233/268] Linking target lib/librte_pci.so.24.1 00:03:28.201 [234/268] Linking target lib/librte_ring.so.24.1 00:03:28.201 [235/268] Linking target lib/librte_timer.so.24.1 00:03:28.201 [236/268] Linking target lib/librte_meter.so.24.1 00:03:28.461 [237/268] Linking target lib/librte_dmadev.so.24.1 00:03:28.461 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:28.461 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:28.461 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:28.461 [241/268] Generating symbol file 
lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:28.461 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:28.461 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:28.461 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:28.461 [245/268] Linking target lib/librte_mempool.so.24.1 00:03:28.721 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:28.721 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:28.721 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:28.721 [249/268] Linking target lib/librte_mbuf.so.24.1 00:03:28.721 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:28.721 [251/268] Linking target lib/librte_compressdev.so.24.1 00:03:28.981 [252/268] Linking target lib/librte_net.so.24.1 00:03:28.981 [253/268] Linking target lib/librte_reorder.so.24.1 00:03:28.981 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:28.981 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:28.981 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:28.981 [257/268] Linking target lib/librte_cmdline.so.24.1 00:03:28.981 [258/268] Linking target lib/librte_hash.so.24.1 00:03:28.981 [259/268] Linking target lib/librte_security.so.24.1 00:03:29.241 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:29.811 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.811 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:30.070 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:30.070 [264/268] Linking target lib/librte_power.so.24.1 00:03:30.070 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:30.070 
[266/268] Linking static target lib/librte_vhost.a 00:03:32.611 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.870 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:32.870 INFO: autodetecting backend as ninja 00:03:32.870 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:50.972 CC lib/ut_mock/mock.o 00:03:50.972 CC lib/ut/ut.o 00:03:50.972 CC lib/log/log.o 00:03:50.972 CC lib/log/log_flags.o 00:03:50.972 CC lib/log/log_deprecated.o 00:03:50.972 LIB libspdk_ut_mock.a 00:03:50.972 LIB libspdk_ut.a 00:03:50.972 LIB libspdk_log.a 00:03:50.972 SO libspdk_ut_mock.so.6.0 00:03:50.973 SO libspdk_ut.so.2.0 00:03:50.973 SO libspdk_log.so.7.0 00:03:50.973 SYMLINK libspdk_ut_mock.so 00:03:50.973 SYMLINK libspdk_ut.so 00:03:50.973 SYMLINK libspdk_log.so 00:03:50.973 CC lib/dma/dma.o 00:03:50.973 CC lib/util/base64.o 00:03:50.973 CC lib/util/cpuset.o 00:03:50.973 CC lib/util/crc16.o 00:03:50.973 CC lib/util/crc32.o 00:03:50.973 CC lib/util/bit_array.o 00:03:50.973 CC lib/util/crc32c.o 00:03:50.973 CXX lib/trace_parser/trace.o 00:03:50.973 CC lib/ioat/ioat.o 00:03:50.973 CC lib/util/crc32_ieee.o 00:03:50.973 CC lib/util/crc64.o 00:03:50.973 CC lib/util/dif.o 00:03:50.973 CC lib/vfio_user/host/vfio_user_pci.o 00:03:50.973 CC lib/util/fd.o 00:03:50.973 LIB libspdk_dma.a 00:03:50.973 SO libspdk_dma.so.5.0 00:03:50.973 CC lib/util/fd_group.o 00:03:50.973 CC lib/vfio_user/host/vfio_user.o 00:03:50.973 CC lib/util/file.o 00:03:50.973 SYMLINK libspdk_dma.so 00:03:50.973 CC lib/util/hexlify.o 00:03:50.973 CC lib/util/iov.o 00:03:50.973 CC lib/util/math.o 00:03:50.973 LIB libspdk_ioat.a 00:03:50.973 SO libspdk_ioat.so.7.0 00:03:50.973 CC lib/util/net.o 00:03:50.973 CC lib/util/pipe.o 00:03:50.973 CC lib/util/strerror_tls.o 00:03:50.973 SYMLINK libspdk_ioat.so 00:03:50.973 LIB libspdk_vfio_user.a 00:03:50.973 CC lib/util/string.o 00:03:50.973 CC 
lib/util/uuid.o 00:03:50.973 SO libspdk_vfio_user.so.5.0 00:03:50.973 CC lib/util/xor.o 00:03:50.973 CC lib/util/zipf.o 00:03:50.973 SYMLINK libspdk_vfio_user.so 00:03:50.973 CC lib/util/md5.o 00:03:50.973 LIB libspdk_util.a 00:03:50.973 SO libspdk_util.so.10.0 00:03:50.973 SYMLINK libspdk_util.so 00:03:50.973 LIB libspdk_trace_parser.a 00:03:50.973 SO libspdk_trace_parser.so.6.0 00:03:50.973 CC lib/rdma_utils/rdma_utils.o 00:03:50.973 CC lib/json/json_parse.o 00:03:50.973 CC lib/json/json_util.o 00:03:50.973 CC lib/json/json_write.o 00:03:50.973 CC lib/idxd/idxd.o 00:03:50.973 CC lib/conf/conf.o 00:03:50.973 CC lib/rdma_provider/common.o 00:03:50.973 CC lib/env_dpdk/env.o 00:03:50.973 CC lib/vmd/vmd.o 00:03:50.973 SYMLINK libspdk_trace_parser.so 00:03:50.973 CC lib/env_dpdk/memory.o 00:03:50.973 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:50.973 CC lib/env_dpdk/pci.o 00:03:50.973 LIB libspdk_conf.a 00:03:50.973 CC lib/env_dpdk/init.o 00:03:50.973 LIB libspdk_rdma_utils.a 00:03:50.973 SO libspdk_conf.so.6.0 00:03:50.973 SO libspdk_rdma_utils.so.1.0 00:03:50.973 LIB libspdk_json.a 00:03:50.973 SYMLINK libspdk_conf.so 00:03:50.973 CC lib/vmd/led.o 00:03:50.973 SO libspdk_json.so.6.0 00:03:50.973 SYMLINK libspdk_rdma_utils.so 00:03:50.973 CC lib/env_dpdk/threads.o 00:03:50.973 LIB libspdk_rdma_provider.a 00:03:50.973 SYMLINK libspdk_json.so 00:03:50.973 CC lib/idxd/idxd_user.o 00:03:50.973 SO libspdk_rdma_provider.so.6.0 00:03:50.973 SYMLINK libspdk_rdma_provider.so 00:03:50.973 CC lib/env_dpdk/pci_ioat.o 00:03:50.973 CC lib/env_dpdk/pci_virtio.o 00:03:50.973 CC lib/env_dpdk/pci_vmd.o 00:03:50.973 CC lib/env_dpdk/pci_idxd.o 00:03:50.973 CC lib/jsonrpc/jsonrpc_server.o 00:03:50.973 CC lib/idxd/idxd_kernel.o 00:03:50.973 CC lib/env_dpdk/pci_event.o 00:03:50.973 CC lib/env_dpdk/sigbus_handler.o 00:03:50.973 CC lib/env_dpdk/pci_dpdk.o 00:03:50.973 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:50.973 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:50.973 LIB libspdk_vmd.a 00:03:50.973 
CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:50.973 SO libspdk_vmd.so.6.0 00:03:50.973 LIB libspdk_idxd.a 00:03:50.973 CC lib/jsonrpc/jsonrpc_client.o 00:03:50.973 SYMLINK libspdk_vmd.so 00:03:50.973 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:50.973 SO libspdk_idxd.so.12.1 00:03:50.973 SYMLINK libspdk_idxd.so 00:03:50.973 LIB libspdk_jsonrpc.a 00:03:50.973 SO libspdk_jsonrpc.so.6.0 00:03:51.233 SYMLINK libspdk_jsonrpc.so 00:03:51.492 CC lib/rpc/rpc.o 00:03:51.492 LIB libspdk_env_dpdk.a 00:03:51.751 SO libspdk_env_dpdk.so.15.0 00:03:51.751 LIB libspdk_rpc.a 00:03:51.751 SO libspdk_rpc.so.6.0 00:03:51.751 SYMLINK libspdk_env_dpdk.so 00:03:51.751 SYMLINK libspdk_rpc.so 00:03:52.320 CC lib/keyring/keyring_rpc.o 00:03:52.320 CC lib/keyring/keyring.o 00:03:52.320 CC lib/notify/notify.o 00:03:52.320 CC lib/notify/notify_rpc.o 00:03:52.320 CC lib/trace/trace.o 00:03:52.320 CC lib/trace/trace_flags.o 00:03:52.320 CC lib/trace/trace_rpc.o 00:03:52.320 LIB libspdk_notify.a 00:03:52.580 SO libspdk_notify.so.6.0 00:03:52.580 LIB libspdk_keyring.a 00:03:52.580 SYMLINK libspdk_notify.so 00:03:52.580 LIB libspdk_trace.a 00:03:52.580 SO libspdk_keyring.so.2.0 00:03:52.580 SO libspdk_trace.so.11.0 00:03:52.580 SYMLINK libspdk_keyring.so 00:03:52.580 SYMLINK libspdk_trace.so 00:03:53.149 CC lib/thread/thread.o 00:03:53.149 CC lib/thread/iobuf.o 00:03:53.149 CC lib/sock/sock.o 00:03:53.149 CC lib/sock/sock_rpc.o 00:03:53.409 LIB libspdk_sock.a 00:03:53.409 SO libspdk_sock.so.10.0 00:03:53.668 SYMLINK libspdk_sock.so 00:03:53.927 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:53.927 CC lib/nvme/nvme_ctrlr.o 00:03:53.927 CC lib/nvme/nvme_fabric.o 00:03:53.927 CC lib/nvme/nvme_ns.o 00:03:53.927 CC lib/nvme/nvme_ns_cmd.o 00:03:53.927 CC lib/nvme/nvme_pcie_common.o 00:03:53.927 CC lib/nvme/nvme_pcie.o 00:03:53.927 CC lib/nvme/nvme_qpair.o 00:03:53.927 CC lib/nvme/nvme.o 00:03:54.864 CC lib/nvme/nvme_quirks.o 00:03:54.864 CC lib/nvme/nvme_transport.o 00:03:54.864 LIB libspdk_thread.a 00:03:54.864 CC 
lib/nvme/nvme_discovery.o 00:03:54.864 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:54.864 SO libspdk_thread.so.10.1 00:03:54.864 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:54.864 CC lib/nvme/nvme_tcp.o 00:03:54.864 SYMLINK libspdk_thread.so 00:03:54.864 CC lib/nvme/nvme_opal.o 00:03:55.123 CC lib/nvme/nvme_io_msg.o 00:03:55.123 CC lib/nvme/nvme_poll_group.o 00:03:55.123 CC lib/nvme/nvme_zns.o 00:03:55.382 CC lib/nvme/nvme_stubs.o 00:03:55.382 CC lib/nvme/nvme_auth.o 00:03:55.382 CC lib/accel/accel.o 00:03:55.640 CC lib/blob/blobstore.o 00:03:55.640 CC lib/init/json_config.o 00:03:55.640 CC lib/init/subsystem.o 00:03:55.640 CC lib/nvme/nvme_cuse.o 00:03:55.640 CC lib/virtio/virtio.o 00:03:55.640 CC lib/nvme/nvme_rdma.o 00:03:55.899 CC lib/init/subsystem_rpc.o 00:03:55.899 CC lib/init/rpc.o 00:03:55.899 CC lib/fsdev/fsdev.o 00:03:56.158 CC lib/virtio/virtio_vhost_user.o 00:03:56.158 LIB libspdk_init.a 00:03:56.158 SO libspdk_init.so.6.0 00:03:56.158 SYMLINK libspdk_init.so 00:03:56.158 CC lib/virtio/virtio_vfio_user.o 00:03:56.158 CC lib/virtio/virtio_pci.o 00:03:56.417 CC lib/accel/accel_rpc.o 00:03:56.417 CC lib/accel/accel_sw.o 00:03:56.417 CC lib/blob/request.o 00:03:56.417 CC lib/blob/zeroes.o 00:03:56.417 LIB libspdk_virtio.a 00:03:56.417 CC lib/event/app.o 00:03:56.676 SO libspdk_virtio.so.7.0 00:03:56.676 SYMLINK libspdk_virtio.so 00:03:56.676 CC lib/event/reactor.o 00:03:56.676 CC lib/fsdev/fsdev_io.o 00:03:56.676 CC lib/event/log_rpc.o 00:03:56.676 CC lib/blob/blob_bs_dev.o 00:03:56.676 CC lib/fsdev/fsdev_rpc.o 00:03:56.676 LIB libspdk_accel.a 00:03:56.676 CC lib/event/app_rpc.o 00:03:56.676 SO libspdk_accel.so.16.0 00:03:56.941 CC lib/event/scheduler_static.o 00:03:56.941 SYMLINK libspdk_accel.so 00:03:56.941 CC lib/bdev/bdev.o 00:03:56.941 CC lib/bdev/scsi_nvme.o 00:03:56.941 CC lib/bdev/bdev_rpc.o 00:03:56.941 CC lib/bdev/bdev_zone.o 00:03:56.941 CC lib/bdev/part.o 00:03:56.941 LIB libspdk_event.a 00:03:56.941 LIB libspdk_fsdev.a 00:03:56.941 SO 
libspdk_event.so.14.0 00:03:57.263 SO libspdk_fsdev.so.1.0 00:03:57.263 SYMLINK libspdk_event.so 00:03:57.263 SYMLINK libspdk_fsdev.so 00:03:57.263 LIB libspdk_nvme.a 00:03:57.523 SO libspdk_nvme.so.14.0 00:03:57.523 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:57.523 SYMLINK libspdk_nvme.so 00:03:58.092 LIB libspdk_fuse_dispatcher.a 00:03:58.092 SO libspdk_fuse_dispatcher.so.1.0 00:03:58.092 SYMLINK libspdk_fuse_dispatcher.so 00:03:58.660 LIB libspdk_blob.a 00:03:58.920 SO libspdk_blob.so.11.0 00:03:58.920 SYMLINK libspdk_blob.so 00:03:59.487 CC lib/blobfs/blobfs.o 00:03:59.487 CC lib/blobfs/tree.o 00:03:59.487 CC lib/lvol/lvol.o 00:03:59.746 LIB libspdk_bdev.a 00:03:59.746 SO libspdk_bdev.so.16.0 00:03:59.746 SYMLINK libspdk_bdev.so 00:04:00.004 CC lib/nbd/nbd_rpc.o 00:04:00.004 CC lib/ftl/ftl_core.o 00:04:00.004 CC lib/ftl/ftl_init.o 00:04:00.004 CC lib/nbd/nbd.o 00:04:00.004 CC lib/ftl/ftl_layout.o 00:04:00.004 CC lib/scsi/dev.o 00:04:00.004 CC lib/nvmf/ctrlr.o 00:04:00.262 CC lib/ublk/ublk.o 00:04:00.262 LIB libspdk_blobfs.a 00:04:00.262 CC lib/ublk/ublk_rpc.o 00:04:00.262 SO libspdk_blobfs.so.10.0 00:04:00.262 LIB libspdk_lvol.a 00:04:00.262 CC lib/ftl/ftl_debug.o 00:04:00.262 SO libspdk_lvol.so.10.0 00:04:00.262 SYMLINK libspdk_blobfs.so 00:04:00.262 CC lib/ftl/ftl_io.o 00:04:00.262 CC lib/scsi/lun.o 00:04:00.519 SYMLINK libspdk_lvol.so 00:04:00.519 CC lib/scsi/port.o 00:04:00.519 CC lib/ftl/ftl_sb.o 00:04:00.519 CC lib/ftl/ftl_l2p.o 00:04:00.519 CC lib/scsi/scsi.o 00:04:00.519 LIB libspdk_nbd.a 00:04:00.519 CC lib/scsi/scsi_bdev.o 00:04:00.519 CC lib/ftl/ftl_l2p_flat.o 00:04:00.519 SO libspdk_nbd.so.7.0 00:04:00.519 CC lib/scsi/scsi_pr.o 00:04:00.519 CC lib/scsi/scsi_rpc.o 00:04:00.519 CC lib/ftl/ftl_nv_cache.o 00:04:00.519 SYMLINK libspdk_nbd.so 00:04:00.519 CC lib/ftl/ftl_band.o 00:04:00.519 CC lib/scsi/task.o 00:04:00.777 CC lib/ftl/ftl_band_ops.o 00:04:00.777 CC lib/ftl/ftl_writer.o 00:04:00.777 CC lib/ftl/ftl_rq.o 00:04:00.777 LIB libspdk_ublk.a 
00:04:00.777 CC lib/ftl/ftl_reloc.o 00:04:00.777 SO libspdk_ublk.so.3.0 00:04:01.036 CC lib/ftl/ftl_l2p_cache.o 00:04:01.036 SYMLINK libspdk_ublk.so 00:04:01.036 CC lib/ftl/ftl_p2l.o 00:04:01.036 CC lib/ftl/ftl_p2l_log.o 00:04:01.036 CC lib/ftl/mngt/ftl_mngt.o 00:04:01.036 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:01.036 CC lib/nvmf/ctrlr_discovery.o 00:04:01.036 LIB libspdk_scsi.a 00:04:01.036 SO libspdk_scsi.so.9.0 00:04:01.294 SYMLINK libspdk_scsi.so 00:04:01.294 CC lib/nvmf/ctrlr_bdev.o 00:04:01.294 CC lib/nvmf/subsystem.o 00:04:01.294 CC lib/nvmf/nvmf.o 00:04:01.294 CC lib/nvmf/nvmf_rpc.o 00:04:01.294 CC lib/nvmf/transport.o 00:04:01.294 CC lib/iscsi/conn.o 00:04:01.553 CC lib/iscsi/init_grp.o 00:04:01.553 CC lib/iscsi/iscsi.o 00:04:01.553 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:01.811 CC lib/iscsi/param.o 00:04:01.811 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:01.811 CC lib/nvmf/tcp.o 00:04:01.811 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:02.069 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:02.069 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:02.069 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:02.069 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:02.069 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:02.327 CC lib/iscsi/portal_grp.o 00:04:02.327 CC lib/iscsi/tgt_node.o 00:04:02.328 CC lib/nvmf/stubs.o 00:04:02.328 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:02.328 CC lib/nvmf/mdns_server.o 00:04:02.328 CC lib/nvmf/rdma.o 00:04:02.586 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:02.586 CC lib/iscsi/iscsi_subsystem.o 00:04:02.586 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:02.586 CC lib/nvmf/auth.o 00:04:02.845 CC lib/iscsi/iscsi_rpc.o 00:04:02.845 CC lib/iscsi/task.o 00:04:02.845 CC lib/ftl/utils/ftl_conf.o 00:04:02.845 CC lib/vhost/vhost.o 00:04:02.845 CC lib/vhost/vhost_rpc.o 00:04:02.845 CC lib/vhost/vhost_scsi.o 00:04:02.845 CC lib/ftl/utils/ftl_md.o 00:04:03.104 CC lib/ftl/utils/ftl_mempool.o 00:04:03.104 CC lib/ftl/utils/ftl_bitmap.o 00:04:03.104 LIB libspdk_iscsi.a 00:04:03.104 SO libspdk_iscsi.so.8.0 
00:04:03.363 CC lib/vhost/vhost_blk.o 00:04:03.363 CC lib/vhost/rte_vhost_user.o 00:04:03.363 SYMLINK libspdk_iscsi.so 00:04:03.363 CC lib/ftl/utils/ftl_property.o 00:04:03.363 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:03.363 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:03.363 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:03.623 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:03.623 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:03.623 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:03.623 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:03.623 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:03.623 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:03.623 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:03.883 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:03.883 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:03.883 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:03.883 CC lib/ftl/base/ftl_base_dev.o 00:04:03.883 CC lib/ftl/base/ftl_base_bdev.o 00:04:03.883 CC lib/ftl/ftl_trace.o 00:04:04.142 LIB libspdk_ftl.a 00:04:04.401 LIB libspdk_vhost.a 00:04:04.401 SO libspdk_ftl.so.9.0 00:04:04.401 SO libspdk_vhost.so.8.0 00:04:04.401 SYMLINK libspdk_vhost.so 00:04:04.661 SYMLINK libspdk_ftl.so 00:04:04.661 LIB libspdk_nvmf.a 00:04:04.921 SO libspdk_nvmf.so.19.0 00:04:05.181 SYMLINK libspdk_nvmf.so 00:04:05.441 CC module/env_dpdk/env_dpdk_rpc.o 00:04:05.441 CC module/accel/error/accel_error.o 00:04:05.441 CC module/fsdev/aio/fsdev_aio.o 00:04:05.441 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:05.441 CC module/keyring/linux/keyring.o 00:04:05.441 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:05.441 CC module/keyring/file/keyring.o 00:04:05.441 CC module/sock/posix/posix.o 00:04:05.441 CC module/scheduler/gscheduler/gscheduler.o 00:04:05.441 CC module/blob/bdev/blob_bdev.o 00:04:05.441 LIB libspdk_env_dpdk_rpc.a 00:04:05.700 SO libspdk_env_dpdk_rpc.so.6.0 00:04:05.700 SYMLINK libspdk_env_dpdk_rpc.so 00:04:05.700 CC module/keyring/file/keyring_rpc.o 00:04:05.700 CC module/keyring/linux/keyring_rpc.o 00:04:05.700 LIB 
libspdk_scheduler_gscheduler.a 00:04:05.700 LIB libspdk_scheduler_dpdk_governor.a 00:04:05.700 SO libspdk_scheduler_gscheduler.so.4.0 00:04:05.700 CC module/accel/error/accel_error_rpc.o 00:04:05.700 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:05.700 LIB libspdk_scheduler_dynamic.a 00:04:05.700 SO libspdk_scheduler_dynamic.so.4.0 00:04:05.700 SYMLINK libspdk_scheduler_gscheduler.so 00:04:05.700 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:05.700 LIB libspdk_keyring_linux.a 00:04:05.700 LIB libspdk_keyring_file.a 00:04:05.700 CC module/accel/ioat/accel_ioat.o 00:04:05.700 SO libspdk_keyring_linux.so.1.0 00:04:05.700 LIB libspdk_blob_bdev.a 00:04:05.700 SYMLINK libspdk_scheduler_dynamic.so 00:04:05.700 SO libspdk_keyring_file.so.2.0 00:04:05.700 CC module/accel/ioat/accel_ioat_rpc.o 00:04:05.700 LIB libspdk_accel_error.a 00:04:05.960 SO libspdk_blob_bdev.so.11.0 00:04:05.960 SO libspdk_accel_error.so.2.0 00:04:05.960 SYMLINK libspdk_keyring_file.so 00:04:05.960 SYMLINK libspdk_keyring_linux.so 00:04:05.960 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:05.960 CC module/fsdev/aio/linux_aio_mgr.o 00:04:05.960 SYMLINK libspdk_blob_bdev.so 00:04:05.960 CC module/accel/iaa/accel_iaa.o 00:04:05.960 SYMLINK libspdk_accel_error.so 00:04:05.960 CC module/accel/iaa/accel_iaa_rpc.o 00:04:05.960 CC module/accel/dsa/accel_dsa.o 00:04:05.960 CC module/accel/dsa/accel_dsa_rpc.o 00:04:05.960 LIB libspdk_accel_ioat.a 00:04:05.960 SO libspdk_accel_ioat.so.6.0 00:04:05.960 SYMLINK libspdk_accel_ioat.so 00:04:06.219 LIB libspdk_accel_iaa.a 00:04:06.219 CC module/bdev/delay/vbdev_delay.o 00:04:06.219 SO libspdk_accel_iaa.so.3.0 00:04:06.219 CC module/bdev/error/vbdev_error.o 00:04:06.219 LIB libspdk_fsdev_aio.a 00:04:06.219 LIB libspdk_accel_dsa.a 00:04:06.219 CC module/bdev/gpt/gpt.o 00:04:06.219 SYMLINK libspdk_accel_iaa.so 00:04:06.219 SO libspdk_fsdev_aio.so.1.0 00:04:06.219 SO libspdk_accel_dsa.so.5.0 00:04:06.219 CC module/bdev/malloc/bdev_malloc.o 00:04:06.219 CC 
module/bdev/lvol/vbdev_lvol.o 00:04:06.219 SYMLINK libspdk_fsdev_aio.so 00:04:06.219 CC module/blobfs/bdev/blobfs_bdev.o 00:04:06.219 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:06.219 SYMLINK libspdk_accel_dsa.so 00:04:06.219 LIB libspdk_sock_posix.a 00:04:06.479 SO libspdk_sock_posix.so.6.0 00:04:06.479 CC module/bdev/gpt/vbdev_gpt.o 00:04:06.479 SYMLINK libspdk_sock_posix.so 00:04:06.479 CC module/bdev/null/bdev_null.o 00:04:06.479 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:06.479 CC module/bdev/error/vbdev_error_rpc.o 00:04:06.479 CC module/bdev/nvme/bdev_nvme.o 00:04:06.479 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:06.479 CC module/bdev/passthru/vbdev_passthru.o 00:04:06.739 LIB libspdk_blobfs_bdev.a 00:04:06.739 LIB libspdk_bdev_error.a 00:04:06.739 SO libspdk_blobfs_bdev.so.6.0 00:04:06.739 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:06.739 SO libspdk_bdev_error.so.6.0 00:04:06.739 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:06.739 LIB libspdk_bdev_gpt.a 00:04:06.739 SYMLINK libspdk_blobfs_bdev.so 00:04:06.739 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:06.739 SO libspdk_bdev_gpt.so.6.0 00:04:06.739 SYMLINK libspdk_bdev_error.so 00:04:06.739 CC module/bdev/null/bdev_null_rpc.o 00:04:06.739 LIB libspdk_bdev_delay.a 00:04:06.739 CC module/bdev/nvme/nvme_rpc.o 00:04:06.739 SYMLINK libspdk_bdev_gpt.so 00:04:06.739 SO libspdk_bdev_delay.so.6.0 00:04:06.739 LIB libspdk_bdev_lvol.a 00:04:06.739 LIB libspdk_bdev_malloc.a 00:04:06.739 SO libspdk_bdev_lvol.so.6.0 00:04:06.739 SYMLINK libspdk_bdev_delay.so 00:04:06.739 SO libspdk_bdev_malloc.so.6.0 00:04:06.998 SYMLINK libspdk_bdev_malloc.so 00:04:06.998 LIB libspdk_bdev_passthru.a 00:04:06.998 CC module/bdev/raid/bdev_raid.o 00:04:06.998 SYMLINK libspdk_bdev_lvol.so 00:04:06.998 LIB libspdk_bdev_null.a 00:04:06.998 SO libspdk_bdev_passthru.so.6.0 00:04:06.998 SO libspdk_bdev_null.so.6.0 00:04:06.998 CC module/bdev/split/vbdev_split.o 00:04:06.998 SYMLINK libspdk_bdev_passthru.so 00:04:06.998 CC 
module/bdev/zone_block/vbdev_zone_block.o 00:04:06.998 CC module/bdev/nvme/bdev_mdns_client.o 00:04:06.998 SYMLINK libspdk_bdev_null.so 00:04:06.998 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:06.998 CC module/bdev/aio/bdev_aio.o 00:04:06.998 CC module/bdev/ftl/bdev_ftl.o 00:04:07.257 CC module/bdev/iscsi/bdev_iscsi.o 00:04:07.257 CC module/bdev/aio/bdev_aio_rpc.o 00:04:07.257 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:07.257 CC module/bdev/split/vbdev_split_rpc.o 00:04:07.257 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:07.257 CC module/bdev/nvme/vbdev_opal.o 00:04:07.257 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:07.257 LIB libspdk_bdev_zone_block.a 00:04:07.257 LIB libspdk_bdev_split.a 00:04:07.257 LIB libspdk_bdev_aio.a 00:04:07.517 SO libspdk_bdev_zone_block.so.6.0 00:04:07.517 SO libspdk_bdev_split.so.6.0 00:04:07.517 SO libspdk_bdev_aio.so.6.0 00:04:07.517 SYMLINK libspdk_bdev_zone_block.so 00:04:07.517 SYMLINK libspdk_bdev_aio.so 00:04:07.517 CC module/bdev/raid/bdev_raid_rpc.o 00:04:07.517 SYMLINK libspdk_bdev_split.so 00:04:07.517 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:07.517 CC module/bdev/raid/bdev_raid_sb.o 00:04:07.517 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:07.517 LIB libspdk_bdev_ftl.a 00:04:07.517 LIB libspdk_bdev_iscsi.a 00:04:07.517 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:07.517 SO libspdk_bdev_ftl.so.6.0 00:04:07.517 SO libspdk_bdev_iscsi.so.6.0 00:04:07.517 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:07.517 SYMLINK libspdk_bdev_ftl.so 00:04:07.517 SYMLINK libspdk_bdev_iscsi.so 00:04:07.517 CC module/bdev/raid/raid0.o 00:04:07.517 CC module/bdev/raid/raid1.o 00:04:07.776 CC module/bdev/raid/concat.o 00:04:07.776 CC module/bdev/raid/raid5f.o 00:04:08.036 LIB libspdk_bdev_virtio.a 00:04:08.036 SO libspdk_bdev_virtio.so.6.0 00:04:08.295 SYMLINK libspdk_bdev_virtio.so 00:04:08.295 LIB libspdk_bdev_raid.a 00:04:08.295 SO libspdk_bdev_raid.so.6.0 00:04:08.557 SYMLINK libspdk_bdev_raid.so 00:04:09.146 LIB 
libspdk_bdev_nvme.a 00:04:09.146 SO libspdk_bdev_nvme.so.7.0 00:04:09.146 SYMLINK libspdk_bdev_nvme.so 00:04:09.715 CC module/event/subsystems/keyring/keyring.o 00:04:09.715 CC module/event/subsystems/iobuf/iobuf.o 00:04:09.715 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:09.715 CC module/event/subsystems/sock/sock.o 00:04:09.715 CC module/event/subsystems/vmd/vmd.o 00:04:09.715 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:09.715 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:09.715 CC module/event/subsystems/fsdev/fsdev.o 00:04:09.715 CC module/event/subsystems/scheduler/scheduler.o 00:04:09.975 LIB libspdk_event_sock.a 00:04:09.975 LIB libspdk_event_fsdev.a 00:04:09.975 LIB libspdk_event_vmd.a 00:04:09.975 LIB libspdk_event_vhost_blk.a 00:04:09.975 LIB libspdk_event_iobuf.a 00:04:09.975 LIB libspdk_event_scheduler.a 00:04:09.975 LIB libspdk_event_keyring.a 00:04:09.975 SO libspdk_event_sock.so.5.0 00:04:09.975 SO libspdk_event_fsdev.so.1.0 00:04:09.975 SO libspdk_event_vmd.so.6.0 00:04:09.975 SO libspdk_event_vhost_blk.so.3.0 00:04:09.975 SO libspdk_event_scheduler.so.4.0 00:04:09.975 SO libspdk_event_iobuf.so.3.0 00:04:09.975 SO libspdk_event_keyring.so.1.0 00:04:09.975 SYMLINK libspdk_event_fsdev.so 00:04:09.975 SYMLINK libspdk_event_sock.so 00:04:09.975 SYMLINK libspdk_event_vmd.so 00:04:09.975 SYMLINK libspdk_event_vhost_blk.so 00:04:09.975 SYMLINK libspdk_event_scheduler.so 00:04:09.975 SYMLINK libspdk_event_keyring.so 00:04:09.975 SYMLINK libspdk_event_iobuf.so 00:04:10.543 CC module/event/subsystems/accel/accel.o 00:04:10.543 LIB libspdk_event_accel.a 00:04:10.543 SO libspdk_event_accel.so.6.0 00:04:10.803 SYMLINK libspdk_event_accel.so 00:04:11.063 CC module/event/subsystems/bdev/bdev.o 00:04:11.322 LIB libspdk_event_bdev.a 00:04:11.322 SO libspdk_event_bdev.so.6.0 00:04:11.322 SYMLINK libspdk_event_bdev.so 00:04:11.582 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:11.582 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:11.582 CC 
module/event/subsystems/nbd/nbd.o 00:04:11.582 CC module/event/subsystems/scsi/scsi.o 00:04:11.842 CC module/event/subsystems/ublk/ublk.o 00:04:11.842 LIB libspdk_event_nbd.a 00:04:11.842 LIB libspdk_event_scsi.a 00:04:11.842 SO libspdk_event_scsi.so.6.0 00:04:11.842 SO libspdk_event_nbd.so.6.0 00:04:11.842 LIB libspdk_event_ublk.a 00:04:11.842 LIB libspdk_event_nvmf.a 00:04:11.842 SO libspdk_event_ublk.so.3.0 00:04:11.842 SYMLINK libspdk_event_nbd.so 00:04:11.842 SYMLINK libspdk_event_scsi.so 00:04:11.842 SYMLINK libspdk_event_ublk.so 00:04:11.842 SO libspdk_event_nvmf.so.6.0 00:04:12.102 SYMLINK libspdk_event_nvmf.so 00:04:12.361 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:12.361 CC module/event/subsystems/iscsi/iscsi.o 00:04:12.361 LIB libspdk_event_vhost_scsi.a 00:04:12.361 LIB libspdk_event_iscsi.a 00:04:12.361 SO libspdk_event_vhost_scsi.so.3.0 00:04:12.621 SO libspdk_event_iscsi.so.6.0 00:04:12.621 SYMLINK libspdk_event_vhost_scsi.so 00:04:12.621 SYMLINK libspdk_event_iscsi.so 00:04:12.881 SO libspdk.so.6.0 00:04:12.881 SYMLINK libspdk.so 00:04:13.141 TEST_HEADER include/spdk/accel.h 00:04:13.141 TEST_HEADER include/spdk/accel_module.h 00:04:13.141 CXX app/trace/trace.o 00:04:13.141 TEST_HEADER include/spdk/assert.h 00:04:13.141 TEST_HEADER include/spdk/barrier.h 00:04:13.141 CC test/rpc_client/rpc_client_test.o 00:04:13.141 TEST_HEADER include/spdk/base64.h 00:04:13.141 TEST_HEADER include/spdk/bdev.h 00:04:13.141 CC app/trace_record/trace_record.o 00:04:13.141 TEST_HEADER include/spdk/bdev_module.h 00:04:13.141 TEST_HEADER include/spdk/bdev_zone.h 00:04:13.141 TEST_HEADER include/spdk/bit_array.h 00:04:13.141 TEST_HEADER include/spdk/bit_pool.h 00:04:13.141 TEST_HEADER include/spdk/blob_bdev.h 00:04:13.141 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:13.141 TEST_HEADER include/spdk/blobfs.h 00:04:13.141 TEST_HEADER include/spdk/blob.h 00:04:13.141 TEST_HEADER include/spdk/conf.h 00:04:13.141 TEST_HEADER include/spdk/config.h 00:04:13.141 CC 
app/nvmf_tgt/nvmf_main.o 00:04:13.141 TEST_HEADER include/spdk/cpuset.h 00:04:13.141 TEST_HEADER include/spdk/crc16.h 00:04:13.141 TEST_HEADER include/spdk/crc32.h 00:04:13.141 TEST_HEADER include/spdk/crc64.h 00:04:13.141 TEST_HEADER include/spdk/dif.h 00:04:13.141 TEST_HEADER include/spdk/dma.h 00:04:13.141 TEST_HEADER include/spdk/endian.h 00:04:13.141 TEST_HEADER include/spdk/env_dpdk.h 00:04:13.141 TEST_HEADER include/spdk/env.h 00:04:13.141 TEST_HEADER include/spdk/event.h 00:04:13.141 TEST_HEADER include/spdk/fd_group.h 00:04:13.141 TEST_HEADER include/spdk/fd.h 00:04:13.141 TEST_HEADER include/spdk/file.h 00:04:13.141 TEST_HEADER include/spdk/fsdev.h 00:04:13.141 TEST_HEADER include/spdk/fsdev_module.h 00:04:13.141 TEST_HEADER include/spdk/ftl.h 00:04:13.141 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:13.141 CC test/thread/poller_perf/poller_perf.o 00:04:13.141 TEST_HEADER include/spdk/gpt_spec.h 00:04:13.141 TEST_HEADER include/spdk/hexlify.h 00:04:13.141 TEST_HEADER include/spdk/histogram_data.h 00:04:13.141 TEST_HEADER include/spdk/idxd.h 00:04:13.141 TEST_HEADER include/spdk/idxd_spec.h 00:04:13.141 TEST_HEADER include/spdk/init.h 00:04:13.141 TEST_HEADER include/spdk/ioat.h 00:04:13.141 TEST_HEADER include/spdk/ioat_spec.h 00:04:13.141 TEST_HEADER include/spdk/iscsi_spec.h 00:04:13.141 TEST_HEADER include/spdk/json.h 00:04:13.141 CC examples/util/zipf/zipf.o 00:04:13.141 TEST_HEADER include/spdk/jsonrpc.h 00:04:13.141 TEST_HEADER include/spdk/keyring.h 00:04:13.141 TEST_HEADER include/spdk/keyring_module.h 00:04:13.141 TEST_HEADER include/spdk/likely.h 00:04:13.141 TEST_HEADER include/spdk/log.h 00:04:13.141 TEST_HEADER include/spdk/lvol.h 00:04:13.141 TEST_HEADER include/spdk/md5.h 00:04:13.141 TEST_HEADER include/spdk/memory.h 00:04:13.141 TEST_HEADER include/spdk/mmio.h 00:04:13.141 TEST_HEADER include/spdk/nbd.h 00:04:13.141 TEST_HEADER include/spdk/net.h 00:04:13.141 TEST_HEADER include/spdk/notify.h 00:04:13.141 TEST_HEADER 
include/spdk/nvme.h 00:04:13.141 TEST_HEADER include/spdk/nvme_intel.h 00:04:13.141 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:13.141 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:13.141 TEST_HEADER include/spdk/nvme_spec.h 00:04:13.141 CC test/app/bdev_svc/bdev_svc.o 00:04:13.141 TEST_HEADER include/spdk/nvme_zns.h 00:04:13.141 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:13.141 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:13.141 CC test/dma/test_dma/test_dma.o 00:04:13.141 TEST_HEADER include/spdk/nvmf.h 00:04:13.141 TEST_HEADER include/spdk/nvmf_spec.h 00:04:13.141 TEST_HEADER include/spdk/nvmf_transport.h 00:04:13.141 TEST_HEADER include/spdk/opal.h 00:04:13.141 TEST_HEADER include/spdk/opal_spec.h 00:04:13.141 TEST_HEADER include/spdk/pci_ids.h 00:04:13.141 TEST_HEADER include/spdk/pipe.h 00:04:13.141 TEST_HEADER include/spdk/queue.h 00:04:13.141 TEST_HEADER include/spdk/reduce.h 00:04:13.141 TEST_HEADER include/spdk/rpc.h 00:04:13.141 TEST_HEADER include/spdk/scheduler.h 00:04:13.141 TEST_HEADER include/spdk/scsi.h 00:04:13.141 TEST_HEADER include/spdk/scsi_spec.h 00:04:13.141 TEST_HEADER include/spdk/sock.h 00:04:13.141 TEST_HEADER include/spdk/stdinc.h 00:04:13.141 TEST_HEADER include/spdk/string.h 00:04:13.141 TEST_HEADER include/spdk/thread.h 00:04:13.141 TEST_HEADER include/spdk/trace.h 00:04:13.141 TEST_HEADER include/spdk/trace_parser.h 00:04:13.141 TEST_HEADER include/spdk/tree.h 00:04:13.141 CC test/env/mem_callbacks/mem_callbacks.o 00:04:13.141 TEST_HEADER include/spdk/ublk.h 00:04:13.141 TEST_HEADER include/spdk/util.h 00:04:13.141 TEST_HEADER include/spdk/uuid.h 00:04:13.141 TEST_HEADER include/spdk/version.h 00:04:13.141 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:13.141 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:13.141 TEST_HEADER include/spdk/vhost.h 00:04:13.141 TEST_HEADER include/spdk/vmd.h 00:04:13.141 TEST_HEADER include/spdk/xor.h 00:04:13.141 TEST_HEADER include/spdk/zipf.h 00:04:13.141 CXX test/cpp_headers/accel.o 00:04:13.401 
LINK nvmf_tgt 00:04:13.401 LINK rpc_client_test 00:04:13.401 LINK poller_perf 00:04:13.401 LINK zipf 00:04:13.401 LINK spdk_trace_record 00:04:13.401 LINK bdev_svc 00:04:13.401 CXX test/cpp_headers/accel_module.o 00:04:13.401 CXX test/cpp_headers/assert.o 00:04:13.401 CXX test/cpp_headers/barrier.o 00:04:13.401 LINK spdk_trace 00:04:13.401 CXX test/cpp_headers/base64.o 00:04:13.401 CXX test/cpp_headers/bdev.o 00:04:13.661 CC examples/ioat/perf/perf.o 00:04:13.661 CXX test/cpp_headers/bdev_module.o 00:04:13.661 CC test/app/histogram_perf/histogram_perf.o 00:04:13.661 CC test/app/jsoncat/jsoncat.o 00:04:13.661 LINK test_dma 00:04:13.661 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:13.661 CC app/iscsi_tgt/iscsi_tgt.o 00:04:13.661 CC test/app/stub/stub.o 00:04:13.661 LINK mem_callbacks 00:04:13.661 CC app/spdk_tgt/spdk_tgt.o 00:04:13.920 LINK histogram_perf 00:04:13.920 LINK jsoncat 00:04:13.920 CXX test/cpp_headers/bdev_zone.o 00:04:13.920 LINK ioat_perf 00:04:13.920 LINK iscsi_tgt 00:04:13.920 LINK stub 00:04:13.920 CXX test/cpp_headers/bit_array.o 00:04:13.920 CC test/env/vtophys/vtophys.o 00:04:13.920 CC app/spdk_lspci/spdk_lspci.o 00:04:13.920 LINK spdk_tgt 00:04:14.180 CC examples/ioat/verify/verify.o 00:04:14.180 CC app/spdk_nvme_perf/perf.o 00:04:14.180 LINK vtophys 00:04:14.180 CXX test/cpp_headers/bit_pool.o 00:04:14.180 CXX test/cpp_headers/blob_bdev.o 00:04:14.180 LINK nvme_fuzz 00:04:14.180 LINK spdk_lspci 00:04:14.180 CC examples/vmd/lsvmd/lsvmd.o 00:04:14.180 CC app/spdk_nvme_identify/identify.o 00:04:14.180 CXX test/cpp_headers/blobfs_bdev.o 00:04:14.180 LINK lsvmd 00:04:14.180 LINK verify 00:04:14.180 CC test/env/memory/memory_ut.o 00:04:14.440 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:14.440 CC test/env/pci/pci_ut.o 00:04:14.440 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:14.440 CXX test/cpp_headers/blobfs.o 00:04:14.440 CC examples/idxd/perf/perf.o 00:04:14.440 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:14.440 CC 
examples/vmd/led/led.o 00:04:14.440 CXX test/cpp_headers/blob.o 00:04:14.440 LINK env_dpdk_post_init 00:04:14.700 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:14.700 CXX test/cpp_headers/conf.o 00:04:14.700 LINK pci_ut 00:04:14.700 LINK led 00:04:14.700 LINK idxd_perf 00:04:14.700 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:14.700 CXX test/cpp_headers/config.o 00:04:14.959 CXX test/cpp_headers/cpuset.o 00:04:14.959 CC app/spdk_nvme_discover/discovery_aer.o 00:04:14.959 LINK interrupt_tgt 00:04:14.959 LINK spdk_nvme_perf 00:04:14.959 CXX test/cpp_headers/crc16.o 00:04:14.959 LINK spdk_nvme_identify 00:04:14.959 CC app/spdk_top/spdk_top.o 00:04:14.959 LINK vhost_fuzz 00:04:14.959 CC examples/thread/thread/thread_ex.o 00:04:15.219 CXX test/cpp_headers/crc32.o 00:04:15.219 LINK spdk_nvme_discover 00:04:15.219 CC app/vhost/vhost.o 00:04:15.219 CC examples/sock/hello_world/hello_sock.o 00:04:15.219 CXX test/cpp_headers/crc64.o 00:04:15.219 CC app/spdk_dd/spdk_dd.o 00:04:15.219 LINK thread 00:04:15.219 CC app/fio/nvme/fio_plugin.o 00:04:15.219 LINK memory_ut 00:04:15.479 LINK vhost 00:04:15.479 CXX test/cpp_headers/dif.o 00:04:15.479 CC test/event/event_perf/event_perf.o 00:04:15.479 LINK hello_sock 00:04:15.479 CXX test/cpp_headers/dma.o 00:04:15.479 LINK event_perf 00:04:15.479 CC app/fio/bdev/fio_plugin.o 00:04:15.479 LINK spdk_dd 00:04:15.739 CXX test/cpp_headers/endian.o 00:04:15.739 CC test/nvme/aer/aer.o 00:04:15.739 CC test/accel/dif/dif.o 00:04:15.739 CC examples/accel/perf/accel_perf.o 00:04:15.739 CC test/event/reactor/reactor.o 00:04:15.739 CXX test/cpp_headers/env_dpdk.o 00:04:15.999 LINK spdk_nvme 00:04:15.999 LINK reactor 00:04:15.999 LINK spdk_top 00:04:15.999 CXX test/cpp_headers/env.o 00:04:15.999 CC examples/blob/hello_world/hello_blob.o 00:04:15.999 LINK aer 00:04:15.999 LINK spdk_bdev 00:04:15.999 CXX test/cpp_headers/event.o 00:04:15.999 CC examples/blob/cli/blobcli.o 00:04:15.999 CC test/event/reactor_perf/reactor_perf.o 00:04:16.258 CC 
test/event/app_repeat/app_repeat.o 00:04:16.258 LINK iscsi_fuzz 00:04:16.258 LINK hello_blob 00:04:16.258 CC test/nvme/reset/reset.o 00:04:16.258 CC test/nvme/sgl/sgl.o 00:04:16.258 LINK reactor_perf 00:04:16.258 CXX test/cpp_headers/fd_group.o 00:04:16.258 LINK accel_perf 00:04:16.258 LINK app_repeat 00:04:16.258 CXX test/cpp_headers/fd.o 00:04:16.517 CC test/nvme/e2edp/nvme_dp.o 00:04:16.517 LINK reset 00:04:16.517 LINK sgl 00:04:16.517 LINK dif 00:04:16.517 CC examples/nvme/hello_world/hello_world.o 00:04:16.517 CC test/blobfs/mkfs/mkfs.o 00:04:16.517 CXX test/cpp_headers/file.o 00:04:16.517 CC test/event/scheduler/scheduler.o 00:04:16.517 LINK blobcli 00:04:16.517 CC test/lvol/esnap/esnap.o 00:04:16.777 CXX test/cpp_headers/fsdev.o 00:04:16.777 CC test/nvme/overhead/overhead.o 00:04:16.777 LINK mkfs 00:04:16.777 LINK hello_world 00:04:16.777 CC test/nvme/err_injection/err_injection.o 00:04:16.777 LINK nvme_dp 00:04:16.777 CC test/nvme/startup/startup.o 00:04:16.777 LINK scheduler 00:04:16.777 CC test/nvme/reserve/reserve.o 00:04:16.777 CXX test/cpp_headers/fsdev_module.o 00:04:16.777 LINK err_injection 00:04:16.777 LINK startup 00:04:16.777 CC examples/nvme/reconnect/reconnect.o 00:04:17.036 CC test/nvme/simple_copy/simple_copy.o 00:04:17.036 CC test/nvme/connect_stress/connect_stress.o 00:04:17.036 LINK overhead 00:04:17.036 CXX test/cpp_headers/ftl.o 00:04:17.036 CC test/nvme/boot_partition/boot_partition.o 00:04:17.036 LINK reserve 00:04:17.036 CC test/nvme/compliance/nvme_compliance.o 00:04:17.036 LINK connect_stress 00:04:17.036 LINK boot_partition 00:04:17.295 CXX test/cpp_headers/fuse_dispatcher.o 00:04:17.295 CC test/nvme/fused_ordering/fused_ordering.o 00:04:17.295 LINK simple_copy 00:04:17.295 CC test/bdev/bdevio/bdevio.o 00:04:17.295 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:17.295 LINK reconnect 00:04:17.295 CXX test/cpp_headers/gpt_spec.o 00:04:17.295 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:17.295 CXX test/cpp_headers/hexlify.o 
00:04:17.295 CC test/nvme/fdp/fdp.o 00:04:17.295 LINK fused_ordering 00:04:17.295 CXX test/cpp_headers/histogram_data.o 00:04:17.555 LINK nvme_compliance 00:04:17.555 CXX test/cpp_headers/idxd.o 00:04:17.555 LINK doorbell_aers 00:04:17.555 CC test/nvme/cuse/cuse.o 00:04:17.555 LINK bdevio 00:04:17.555 CXX test/cpp_headers/idxd_spec.o 00:04:17.555 CXX test/cpp_headers/init.o 00:04:17.555 CC examples/nvme/arbitration/arbitration.o 00:04:17.555 LINK fdp 00:04:17.815 CC examples/bdev/hello_world/hello_bdev.o 00:04:17.815 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:17.815 LINK nvme_manage 00:04:17.815 CXX test/cpp_headers/ioat.o 00:04:17.815 CXX test/cpp_headers/ioat_spec.o 00:04:17.815 CC examples/nvme/hotplug/hotplug.o 00:04:17.815 CXX test/cpp_headers/iscsi_spec.o 00:04:17.815 LINK hello_bdev 00:04:17.815 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:17.815 CC examples/bdev/bdevperf/bdevperf.o 00:04:18.075 LINK arbitration 00:04:18.075 LINK hello_fsdev 00:04:18.075 CC examples/nvme/abort/abort.o 00:04:18.075 CXX test/cpp_headers/json.o 00:04:18.075 LINK hotplug 00:04:18.075 LINK cmb_copy 00:04:18.075 CXX test/cpp_headers/jsonrpc.o 00:04:18.075 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:18.075 CXX test/cpp_headers/keyring.o 00:04:18.075 CXX test/cpp_headers/keyring_module.o 00:04:18.338 CXX test/cpp_headers/likely.o 00:04:18.338 CXX test/cpp_headers/log.o 00:04:18.338 CXX test/cpp_headers/lvol.o 00:04:18.338 CXX test/cpp_headers/md5.o 00:04:18.338 CXX test/cpp_headers/memory.o 00:04:18.338 LINK pmr_persistence 00:04:18.338 CXX test/cpp_headers/mmio.o 00:04:18.338 LINK abort 00:04:18.338 CXX test/cpp_headers/nbd.o 00:04:18.338 CXX test/cpp_headers/net.o 00:04:18.338 CXX test/cpp_headers/notify.o 00:04:18.338 CXX test/cpp_headers/nvme.o 00:04:18.338 CXX test/cpp_headers/nvme_intel.o 00:04:18.598 CXX test/cpp_headers/nvme_ocssd.o 00:04:18.598 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:18.598 CXX test/cpp_headers/nvme_spec.o 00:04:18.598 CXX 
test/cpp_headers/nvme_zns.o 00:04:18.598 CXX test/cpp_headers/nvmf_cmd.o 00:04:18.598 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:18.598 CXX test/cpp_headers/nvmf.o 00:04:18.598 CXX test/cpp_headers/nvmf_spec.o 00:04:18.598 CXX test/cpp_headers/nvmf_transport.o 00:04:18.598 CXX test/cpp_headers/opal.o 00:04:18.598 CXX test/cpp_headers/opal_spec.o 00:04:18.598 CXX test/cpp_headers/pci_ids.o 00:04:18.857 LINK bdevperf 00:04:18.857 CXX test/cpp_headers/pipe.o 00:04:18.857 CXX test/cpp_headers/queue.o 00:04:18.857 LINK cuse 00:04:18.857 CXX test/cpp_headers/reduce.o 00:04:18.857 CXX test/cpp_headers/rpc.o 00:04:18.857 CXX test/cpp_headers/scheduler.o 00:04:18.857 CXX test/cpp_headers/scsi.o 00:04:18.857 CXX test/cpp_headers/scsi_spec.o 00:04:18.857 CXX test/cpp_headers/sock.o 00:04:18.857 CXX test/cpp_headers/stdinc.o 00:04:18.857 CXX test/cpp_headers/string.o 00:04:18.857 CXX test/cpp_headers/thread.o 00:04:18.857 CXX test/cpp_headers/trace.o 00:04:18.857 CXX test/cpp_headers/trace_parser.o 00:04:18.857 CXX test/cpp_headers/tree.o 00:04:19.117 CXX test/cpp_headers/ublk.o 00:04:19.117 CXX test/cpp_headers/util.o 00:04:19.117 CXX test/cpp_headers/uuid.o 00:04:19.117 CXX test/cpp_headers/version.o 00:04:19.117 CXX test/cpp_headers/vfio_user_pci.o 00:04:19.117 CXX test/cpp_headers/vfio_user_spec.o 00:04:19.117 CXX test/cpp_headers/vhost.o 00:04:19.117 CXX test/cpp_headers/vmd.o 00:04:19.117 CXX test/cpp_headers/xor.o 00:04:19.117 CC examples/nvmf/nvmf/nvmf.o 00:04:19.117 CXX test/cpp_headers/zipf.o 00:04:19.376 LINK nvmf 00:04:22.680 LINK esnap 00:04:22.940 00:04:22.940 real 1m20.432s 00:04:22.940 user 6m59.398s 00:04:22.940 sys 1m35.312s 00:04:22.940 12:21:34 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:22.940 12:21:34 make -- common/autotest_common.sh@10 -- $ set +x 00:04:22.940 ************************************ 00:04:22.940 END TEST make 00:04:22.940 ************************************ 00:04:22.940 12:21:34 -- spdk/autobuild.sh@1 -- $ 
stop_monitor_resources 00:04:22.940 12:21:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:22.940 12:21:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:22.940 12:21:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:22.940 12:21:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:22.940 12:21:34 -- pm/common@44 -- $ pid=5453 00:04:22.940 12:21:34 -- pm/common@50 -- $ kill -TERM 5453 00:04:22.940 12:21:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:22.940 12:21:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:22.940 12:21:34 -- pm/common@44 -- $ pid=5455 00:04:22.940 12:21:34 -- pm/common@50 -- $ kill -TERM 5455 00:04:22.940 12:21:34 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:22.940 12:21:34 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:22.940 12:21:34 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:23.200 12:21:34 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:23.200 12:21:34 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.200 12:21:34 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.200 12:21:34 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.200 12:21:34 -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.200 12:21:34 -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.200 12:21:34 -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.200 12:21:34 -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.200 12:21:34 -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.200 12:21:34 -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.200 12:21:34 -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.200 12:21:34 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.200 12:21:34 -- scripts/common.sh@344 -- # case "$op" in 00:04:23.200 12:21:34 -- scripts/common.sh@345 -- # : 1 00:04:23.200 12:21:34 -- scripts/common.sh@364 -- 
# (( v = 0 )) 00:04:23.200 12:21:34 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.200 12:21:34 -- scripts/common.sh@365 -- # decimal 1 00:04:23.200 12:21:34 -- scripts/common.sh@353 -- # local d=1 00:04:23.200 12:21:34 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.200 12:21:34 -- scripts/common.sh@355 -- # echo 1 00:04:23.200 12:21:34 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.200 12:21:34 -- scripts/common.sh@366 -- # decimal 2 00:04:23.200 12:21:34 -- scripts/common.sh@353 -- # local d=2 00:04:23.200 12:21:34 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.200 12:21:34 -- scripts/common.sh@355 -- # echo 2 00:04:23.200 12:21:34 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.200 12:21:34 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.200 12:21:34 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.200 12:21:34 -- scripts/common.sh@368 -- # return 0 00:04:23.200 12:21:34 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.200 12:21:34 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:23.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.200 --rc genhtml_branch_coverage=1 00:04:23.200 --rc genhtml_function_coverage=1 00:04:23.200 --rc genhtml_legend=1 00:04:23.200 --rc geninfo_all_blocks=1 00:04:23.200 --rc geninfo_unexecuted_blocks=1 00:04:23.200 00:04:23.200 ' 00:04:23.200 12:21:34 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:23.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.200 --rc genhtml_branch_coverage=1 00:04:23.200 --rc genhtml_function_coverage=1 00:04:23.200 --rc genhtml_legend=1 00:04:23.200 --rc geninfo_all_blocks=1 00:04:23.200 --rc geninfo_unexecuted_blocks=1 00:04:23.200 00:04:23.200 ' 00:04:23.200 12:21:34 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:23.200 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:23.200 --rc genhtml_branch_coverage=1 00:04:23.200 --rc genhtml_function_coverage=1 00:04:23.200 --rc genhtml_legend=1 00:04:23.200 --rc geninfo_all_blocks=1 00:04:23.200 --rc geninfo_unexecuted_blocks=1 00:04:23.200 00:04:23.200 ' 00:04:23.200 12:21:34 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:23.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.200 --rc genhtml_branch_coverage=1 00:04:23.200 --rc genhtml_function_coverage=1 00:04:23.200 --rc genhtml_legend=1 00:04:23.200 --rc geninfo_all_blocks=1 00:04:23.200 --rc geninfo_unexecuted_blocks=1 00:04:23.200 00:04:23.200 ' 00:04:23.200 12:21:34 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:23.200 12:21:34 -- nvmf/common.sh@7 -- # uname -s 00:04:23.200 12:21:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:23.200 12:21:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:23.200 12:21:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:23.200 12:21:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:23.200 12:21:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:23.200 12:21:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:23.200 12:21:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:23.200 12:21:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:23.200 12:21:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:23.200 12:21:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:23.200 12:21:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:47745ecc-8228-4d31-b22b-27f2eabba6fc 00:04:23.200 12:21:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=47745ecc-8228-4d31-b22b-27f2eabba6fc 00:04:23.200 12:21:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:23.200 12:21:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:23.200 12:21:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 
00:04:23.200 12:21:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:23.201 12:21:34 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:23.201 12:21:34 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:23.201 12:21:34 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:23.201 12:21:34 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:23.201 12:21:34 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:23.201 12:21:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.201 12:21:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.201 12:21:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.201 12:21:34 -- paths/export.sh@5 -- # export PATH 00:04:23.201 12:21:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.201 12:21:34 -- nvmf/common.sh@51 -- # : 0 00:04:23.201 12:21:34 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:23.201 12:21:34 -- 
nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:23.201 12:21:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:23.201 12:21:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:23.201 12:21:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:23.201 12:21:34 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:23.201 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:23.201 12:21:34 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:23.201 12:21:34 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:23.201 12:21:34 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:23.201 12:21:34 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:23.201 12:21:34 -- spdk/autotest.sh@32 -- # uname -s 00:04:23.201 12:21:34 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:23.201 12:21:34 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:23.201 12:21:34 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:23.201 12:21:34 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:23.201 12:21:34 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:23.201 12:21:34 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:23.201 12:21:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:23.201 12:21:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:23.201 12:21:35 -- spdk/autotest.sh@48 -- # udevadm_pid=54371 00:04:23.201 12:21:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:23.201 12:21:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:23.201 12:21:35 -- pm/common@17 -- # local monitor 00:04:23.201 12:21:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.201 12:21:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.201 12:21:35 -- pm/common@25 -- # sleep 
1 00:04:23.201 12:21:35 -- pm/common@21 -- # date +%s 00:04:23.201 12:21:35 -- pm/common@21 -- # date +%s 00:04:23.201 12:21:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727698895 00:04:23.201 12:21:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727698895 00:04:23.201 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727698895_collect-cpu-load.pm.log 00:04:23.201 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727698895_collect-vmstat.pm.log 00:04:24.582 12:21:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:24.582 12:21:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:24.582 12:21:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:24.582 12:21:36 -- common/autotest_common.sh@10 -- # set +x 00:04:24.582 12:21:36 -- spdk/autotest.sh@59 -- # create_test_list 00:04:24.582 12:21:36 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:24.582 12:21:36 -- common/autotest_common.sh@10 -- # set +x 00:04:24.582 12:21:36 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:24.582 12:21:36 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:24.582 12:21:36 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:24.582 12:21:36 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:24.582 12:21:36 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:24.582 12:21:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:24.582 12:21:36 -- common/autotest_common.sh@1455 -- # uname 00:04:24.582 12:21:36 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:24.582 12:21:36 -- spdk/autotest.sh@66 -- # 
freebsd_set_maxsock_buf 00:04:24.582 12:21:36 -- common/autotest_common.sh@1475 -- # uname 00:04:24.582 12:21:36 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:24.582 12:21:36 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:24.582 12:21:36 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:24.582 lcov: LCOV version 1.15 00:04:24.582 12:21:36 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:39.476 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:39.476 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:54.376 12:22:04 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:54.376 12:22:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:54.376 12:22:04 -- common/autotest_common.sh@10 -- # set +x 00:04:54.376 12:22:04 -- spdk/autotest.sh@78 -- # rm -f 00:04:54.376 12:22:04 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:54.376 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:54.376 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:54.376 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:54.376 12:22:05 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:54.376 12:22:05 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:54.376 12:22:05 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:54.376 
12:22:05 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:54.376 12:22:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:54.376 12:22:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:54.376 12:22:05 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:54.376 12:22:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:54.376 12:22:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:54.376 12:22:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:54.376 12:22:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:54.376 12:22:05 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:54.376 12:22:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:54.376 12:22:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:54.376 12:22:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:54.376 12:22:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:54.376 12:22:05 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:54.376 12:22:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:54.376 12:22:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:54.376 12:22:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:54.376 12:22:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:54.376 12:22:05 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:54.376 12:22:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:54.376 12:22:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:54.376 12:22:05 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:54.376 12:22:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:54.376 12:22:05 -- spdk/autotest.sh@99 -- 
# [[ -z '' ]] 00:04:54.376 12:22:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:54.376 12:22:05 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:54.376 12:22:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:54.376 No valid GPT data, bailing 00:04:54.376 12:22:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:54.376 12:22:05 -- scripts/common.sh@394 -- # pt= 00:04:54.376 12:22:05 -- scripts/common.sh@395 -- # return 1 00:04:54.376 12:22:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:54.376 1+0 records in 00:04:54.376 1+0 records out 00:04:54.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00454495 s, 231 MB/s 00:04:54.376 12:22:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:54.376 12:22:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:54.376 12:22:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:54.376 12:22:05 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:54.376 12:22:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:54.376 No valid GPT data, bailing 00:04:54.376 12:22:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:54.376 12:22:05 -- scripts/common.sh@394 -- # pt= 00:04:54.376 12:22:05 -- scripts/common.sh@395 -- # return 1 00:04:54.376 12:22:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:54.376 1+0 records in 00:04:54.376 1+0 records out 00:04:54.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00657544 s, 159 MB/s 00:04:54.376 12:22:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:54.376 12:22:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:54.376 12:22:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:54.376 12:22:05 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:54.376 12:22:05 -- scripts/common.sh@390 -- 
# /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:54.376 No valid GPT data, bailing 00:04:54.376 12:22:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:54.376 12:22:05 -- scripts/common.sh@394 -- # pt= 00:04:54.376 12:22:05 -- scripts/common.sh@395 -- # return 1 00:04:54.376 12:22:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:54.376 1+0 records in 00:04:54.376 1+0 records out 00:04:54.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00634519 s, 165 MB/s 00:04:54.376 12:22:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:54.376 12:22:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:54.376 12:22:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:54.376 12:22:05 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:54.376 12:22:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:54.376 No valid GPT data, bailing 00:04:54.376 12:22:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:54.376 12:22:06 -- scripts/common.sh@394 -- # pt= 00:04:54.376 12:22:06 -- scripts/common.sh@395 -- # return 1 00:04:54.376 12:22:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:54.376 1+0 records in 00:04:54.376 1+0 records out 00:04:54.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00683986 s, 153 MB/s 00:04:54.376 12:22:06 -- spdk/autotest.sh@105 -- # sync 00:04:54.376 12:22:06 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:54.376 12:22:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:54.376 12:22:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:57.671 12:22:08 -- spdk/autotest.sh@111 -- # uname -s 00:04:57.671 12:22:08 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:57.671 12:22:08 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:57.671 12:22:08 -- 
spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:57.931 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.931 Hugepages 00:04:57.931 node hugesize free / total 00:04:57.931 node0 1048576kB 0 / 0 00:04:57.931 node0 2048kB 0 / 0 00:04:57.931 00:04:57.931 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:58.190 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:58.190 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:58.190 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:58.190 12:22:10 -- spdk/autotest.sh@117 -- # uname -s 00:04:58.190 12:22:10 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:58.190 12:22:10 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:58.190 12:22:10 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:59.129 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:59.129 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:59.389 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:59.389 12:22:11 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:00.329 12:22:12 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:00.329 12:22:12 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:00.329 12:22:12 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:00.329 12:22:12 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:00.329 12:22:12 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:00.329 12:22:12 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:00.329 12:22:12 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:00.329 12:22:12 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:00.329 12:22:12 -- 
common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:00.329 12:22:12 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:00.329 12:22:12 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:00.329 12:22:12 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:00.898 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:00.898 Waiting for block devices as requested 00:05:01.158 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:01.158 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:01.158 12:22:12 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:01.158 12:22:12 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:01.158 12:22:12 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:01.158 12:22:12 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:01.158 12:22:12 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:01.158 12:22:12 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:01.158 12:22:12 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:01.158 12:22:13 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:01.158 12:22:13 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:01.158 12:22:13 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:01.158 12:22:13 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:01.158 12:22:13 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:01.158 12:22:13 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:01.158 12:22:13 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:01.158 12:22:13 -- 
common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:01.158 12:22:13 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:01.158 12:22:13 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:01.158 12:22:13 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:01.158 12:22:13 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:01.158 12:22:13 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:01.158 12:22:13 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:01.158 12:22:13 -- common/autotest_common.sh@1541 -- # continue 00:05:01.158 12:22:13 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:01.158 12:22:13 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:01.158 12:22:13 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:01.158 12:22:13 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:01.158 12:22:13 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:01.158 12:22:13 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:01.418 12:22:13 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:01.418 12:22:13 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:01.418 12:22:13 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:01.418 12:22:13 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:01.418 12:22:13 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:01.418 12:22:13 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:01.418 12:22:13 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:01.418 12:22:13 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:01.418 12:22:13 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:01.418 12:22:13 -- 
common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:01.418 12:22:13 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:01.418 12:22:13 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:01.418 12:22:13 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:01.418 12:22:13 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:01.418 12:22:13 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:01.418 12:22:13 -- common/autotest_common.sh@1541 -- # continue 00:05:01.418 12:22:13 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:01.418 12:22:13 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.418 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:05:01.418 12:22:13 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:01.418 12:22:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:01.418 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:05:01.418 12:22:13 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.357 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.357 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.357 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.357 12:22:14 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:02.357 12:22:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.357 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:05:02.617 12:22:14 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:02.617 12:22:14 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:02.617 12:22:14 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:02.617 12:22:14 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:02.617 12:22:14 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:02.617 12:22:14 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:02.617 12:22:14 -- 
common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:02.617 12:22:14 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:02.617 12:22:14 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:02.617 12:22:14 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:02.617 12:22:14 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:02.617 12:22:14 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:02.617 12:22:14 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:02.617 12:22:14 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:02.617 12:22:14 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:02.617 12:22:14 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:02.617 12:22:14 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:02.617 12:22:14 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:02.617 12:22:14 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:02.617 12:22:14 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:02.617 12:22:14 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:02.617 12:22:14 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:02.617 12:22:14 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:02.617 12:22:14 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:02.617 12:22:14 -- common/autotest_common.sh@1570 -- # return 0 00:05:02.617 12:22:14 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:02.617 12:22:14 -- common/autotest_common.sh@1578 -- # return 0 00:05:02.617 12:22:14 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:02.617 12:22:14 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:02.617 12:22:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 
00:05:02.617 12:22:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:02.617 12:22:14 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:02.617 12:22:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.617 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:05:02.617 12:22:14 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:02.617 12:22:14 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:02.617 12:22:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.617 12:22:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.617 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:05:02.617 ************************************ 00:05:02.617 START TEST env 00:05:02.617 ************************************ 00:05:02.617 12:22:14 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:02.877 * Looking for test storage... 00:05:02.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:02.877 12:22:14 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:02.877 12:22:14 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:02.877 12:22:14 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:02.877 12:22:14 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:02.877 12:22:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.877 12:22:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.877 12:22:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.877 12:22:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.877 12:22:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.877 12:22:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.877 12:22:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.877 12:22:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.877 12:22:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.877 12:22:14 env -- 
scripts/common.sh@341 -- # ver2_l=1 00:05:02.877 12:22:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.877 12:22:14 env -- scripts/common.sh@344 -- # case "$op" in 00:05:02.877 12:22:14 env -- scripts/common.sh@345 -- # : 1 00:05:02.877 12:22:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.877 12:22:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.877 12:22:14 env -- scripts/common.sh@365 -- # decimal 1 00:05:02.877 12:22:14 env -- scripts/common.sh@353 -- # local d=1 00:05:02.877 12:22:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.877 12:22:14 env -- scripts/common.sh@355 -- # echo 1 00:05:02.877 12:22:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.877 12:22:14 env -- scripts/common.sh@366 -- # decimal 2 00:05:02.877 12:22:14 env -- scripts/common.sh@353 -- # local d=2 00:05:02.877 12:22:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.877 12:22:14 env -- scripts/common.sh@355 -- # echo 2 00:05:02.877 12:22:14 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.877 12:22:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.877 12:22:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.877 12:22:14 env -- scripts/common.sh@368 -- # return 0 00:05:02.877 12:22:14 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.877 12:22:14 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:02.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.877 --rc genhtml_branch_coverage=1 00:05:02.877 --rc genhtml_function_coverage=1 00:05:02.877 --rc genhtml_legend=1 00:05:02.877 --rc geninfo_all_blocks=1 00:05:02.877 --rc geninfo_unexecuted_blocks=1 00:05:02.877 00:05:02.877 ' 00:05:02.877 12:22:14 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:02.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:02.877 --rc genhtml_branch_coverage=1 00:05:02.877 --rc genhtml_function_coverage=1 00:05:02.877 --rc genhtml_legend=1 00:05:02.877 --rc geninfo_all_blocks=1 00:05:02.877 --rc geninfo_unexecuted_blocks=1 00:05:02.877 00:05:02.877 ' 00:05:02.878 12:22:14 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:02.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.878 --rc genhtml_branch_coverage=1 00:05:02.878 --rc genhtml_function_coverage=1 00:05:02.878 --rc genhtml_legend=1 00:05:02.878 --rc geninfo_all_blocks=1 00:05:02.878 --rc geninfo_unexecuted_blocks=1 00:05:02.878 00:05:02.878 ' 00:05:02.878 12:22:14 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:02.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.878 --rc genhtml_branch_coverage=1 00:05:02.878 --rc genhtml_function_coverage=1 00:05:02.878 --rc genhtml_legend=1 00:05:02.878 --rc geninfo_all_blocks=1 00:05:02.878 --rc geninfo_unexecuted_blocks=1 00:05:02.878 00:05:02.878 ' 00:05:02.878 12:22:14 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:02.878 12:22:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.878 12:22:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.878 12:22:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.878 ************************************ 00:05:02.878 START TEST env_memory 00:05:02.878 ************************************ 00:05:02.878 12:22:14 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:02.878 00:05:02.878 00:05:02.878 CUnit - A unit testing framework for C - Version 2.1-3 00:05:02.878 http://cunit.sourceforge.net/ 00:05:02.878 00:05:02.878 00:05:02.878 Suite: memory 00:05:02.878 Test: alloc and free memory map ...[2024-09-30 12:22:14.728255] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial 
mem_map notify failed 00:05:02.878 passed 00:05:02.878 Test: mem map translation ...[2024-09-30 12:22:14.771931] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:02.878 [2024-09-30 12:22:14.771977] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:02.878 [2024-09-30 12:22:14.772034] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:02.878 [2024-09-30 12:22:14.772053] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:03.137 passed 00:05:03.137 Test: mem map registration ...[2024-09-30 12:22:14.835857] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:03.137 [2024-09-30 12:22:14.835899] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:03.137 passed 00:05:03.137 Test: mem map adjacent registrations ...passed 00:05:03.137 00:05:03.137 Run Summary: Type Total Ran Passed Failed Inactive 00:05:03.137 suites 1 1 n/a 0 0 00:05:03.137 tests 4 4 4 0 0 00:05:03.137 asserts 152 152 152 0 n/a 00:05:03.137 00:05:03.137 Elapsed time = 0.231 seconds 00:05:03.137 00:05:03.137 real 0m0.282s 00:05:03.137 user 0m0.244s 00:05:03.137 sys 0m0.027s 00:05:03.137 12:22:14 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.137 12:22:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:03.137 ************************************ 00:05:03.137 END TEST env_memory 00:05:03.137 ************************************ 00:05:03.137 12:22:14 
env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:03.137 12:22:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.137 12:22:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.137 12:22:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.137 ************************************ 00:05:03.137 START TEST env_vtophys 00:05:03.137 ************************************ 00:05:03.137 12:22:14 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:03.395 EAL: lib.eal log level changed from notice to debug 00:05:03.396 EAL: Detected lcore 0 as core 0 on socket 0 00:05:03.396 EAL: Detected lcore 1 as core 0 on socket 0 00:05:03.396 EAL: Detected lcore 2 as core 0 on socket 0 00:05:03.396 EAL: Detected lcore 3 as core 0 on socket 0 00:05:03.396 EAL: Detected lcore 4 as core 0 on socket 0 00:05:03.396 EAL: Detected lcore 5 as core 0 on socket 0 00:05:03.396 EAL: Detected lcore 6 as core 0 on socket 0 00:05:03.396 EAL: Detected lcore 7 as core 0 on socket 0 00:05:03.396 EAL: Detected lcore 8 as core 0 on socket 0 00:05:03.396 EAL: Detected lcore 9 as core 0 on socket 0 00:05:03.396 EAL: Maximum logical cores by configuration: 128 00:05:03.396 EAL: Detected CPU lcores: 10 00:05:03.396 EAL: Detected NUMA nodes: 1 00:05:03.396 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:03.396 EAL: Detected shared linkage of DPDK 00:05:03.396 EAL: No shared files mode enabled, IPC will be disabled 00:05:03.396 EAL: Selected IOVA mode 'PA' 00:05:03.396 EAL: Probing VFIO support... 00:05:03.396 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:03.396 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:03.396 EAL: Ask a virtual area of 0x2e000 bytes 00:05:03.396 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:03.396 EAL: Setting up physically contiguous memory... 
00:05:03.396 EAL: Setting maximum number of open files to 524288 00:05:03.396 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:03.396 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:03.396 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.396 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:03.396 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.396 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.396 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:03.396 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:03.396 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.396 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:03.396 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.396 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.396 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:03.396 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:03.396 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.396 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:03.396 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.396 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.396 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:03.396 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:03.396 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.396 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:03.396 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.396 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.396 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:03.396 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:03.396 EAL: Hugepages will be freed exactly as allocated. 
00:05:03.396 EAL: No shared files mode enabled, IPC is disabled 00:05:03.396 EAL: No shared files mode enabled, IPC is disabled 00:05:03.396 EAL: TSC frequency is ~2290000 KHz 00:05:03.396 EAL: Main lcore 0 is ready (tid=7f6cdaedfa40;cpuset=[0]) 00:05:03.396 EAL: Trying to obtain current memory policy. 00:05:03.396 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.396 EAL: Restoring previous memory policy: 0 00:05:03.396 EAL: request: mp_malloc_sync 00:05:03.396 EAL: No shared files mode enabled, IPC is disabled 00:05:03.396 EAL: Heap on socket 0 was expanded by 2MB 00:05:03.396 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:03.396 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:03.396 EAL: Mem event callback 'spdk:(nil)' registered 00:05:03.396 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:03.396 00:05:03.396 00:05:03.396 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.396 http://cunit.sourceforge.net/ 00:05:03.396 00:05:03.396 00:05:03.396 Suite: components_suite 00:05:03.963 Test: vtophys_malloc_test ...passed 00:05:03.963 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:03.963 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.963 EAL: Restoring previous memory policy: 4 00:05:03.963 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.963 EAL: request: mp_malloc_sync 00:05:03.963 EAL: No shared files mode enabled, IPC is disabled 00:05:03.963 EAL: Heap on socket 0 was expanded by 4MB 00:05:03.963 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.963 EAL: request: mp_malloc_sync 00:05:03.963 EAL: No shared files mode enabled, IPC is disabled 00:05:03.963 EAL: Heap on socket 0 was shrunk by 4MB 00:05:03.963 EAL: Trying to obtain current memory policy. 
00:05:03.963 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.963 EAL: Restoring previous memory policy: 4 00:05:03.963 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.963 EAL: request: mp_malloc_sync 00:05:03.963 EAL: No shared files mode enabled, IPC is disabled 00:05:03.963 EAL: Heap on socket 0 was expanded by 6MB 00:05:03.963 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.963 EAL: request: mp_malloc_sync 00:05:03.963 EAL: No shared files mode enabled, IPC is disabled 00:05:03.963 EAL: Heap on socket 0 was shrunk by 6MB 00:05:03.963 EAL: Trying to obtain current memory policy. 00:05:03.963 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.963 EAL: Restoring previous memory policy: 4 00:05:03.963 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.963 EAL: request: mp_malloc_sync 00:05:03.963 EAL: No shared files mode enabled, IPC is disabled 00:05:03.963 EAL: Heap on socket 0 was expanded by 10MB 00:05:03.963 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.963 EAL: request: mp_malloc_sync 00:05:03.963 EAL: No shared files mode enabled, IPC is disabled 00:05:03.963 EAL: Heap on socket 0 was shrunk by 10MB 00:05:03.963 EAL: Trying to obtain current memory policy. 00:05:03.963 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.963 EAL: Restoring previous memory policy: 4 00:05:03.963 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.963 EAL: request: mp_malloc_sync 00:05:03.963 EAL: No shared files mode enabled, IPC is disabled 00:05:03.963 EAL: Heap on socket 0 was expanded by 18MB 00:05:03.963 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.963 EAL: request: mp_malloc_sync 00:05:03.963 EAL: No shared files mode enabled, IPC is disabled 00:05:03.963 EAL: Heap on socket 0 was shrunk by 18MB 00:05:03.963 EAL: Trying to obtain current memory policy. 
00:05:03.963 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.963 EAL: Restoring previous memory policy: 4 00:05:03.963 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.963 EAL: request: mp_malloc_sync 00:05:03.963 EAL: No shared files mode enabled, IPC is disabled 00:05:03.963 EAL: Heap on socket 0 was expanded by 34MB 00:05:04.222 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.222 EAL: request: mp_malloc_sync 00:05:04.222 EAL: No shared files mode enabled, IPC is disabled 00:05:04.222 EAL: Heap on socket 0 was shrunk by 34MB 00:05:04.222 EAL: Trying to obtain current memory policy. 00:05:04.222 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.222 EAL: Restoring previous memory policy: 4 00:05:04.222 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.222 EAL: request: mp_malloc_sync 00:05:04.222 EAL: No shared files mode enabled, IPC is disabled 00:05:04.222 EAL: Heap on socket 0 was expanded by 66MB 00:05:04.222 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.222 EAL: request: mp_malloc_sync 00:05:04.222 EAL: No shared files mode enabled, IPC is disabled 00:05:04.222 EAL: Heap on socket 0 was shrunk by 66MB 00:05:04.480 EAL: Trying to obtain current memory policy. 00:05:04.480 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.480 EAL: Restoring previous memory policy: 4 00:05:04.480 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.480 EAL: request: mp_malloc_sync 00:05:04.480 EAL: No shared files mode enabled, IPC is disabled 00:05:04.480 EAL: Heap on socket 0 was expanded by 130MB 00:05:04.738 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.738 EAL: request: mp_malloc_sync 00:05:04.738 EAL: No shared files mode enabled, IPC is disabled 00:05:04.738 EAL: Heap on socket 0 was shrunk by 130MB 00:05:04.997 EAL: Trying to obtain current memory policy. 
00:05:04.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.997 EAL: Restoring previous memory policy: 4 00:05:04.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.997 EAL: request: mp_malloc_sync 00:05:04.997 EAL: No shared files mode enabled, IPC is disabled 00:05:04.997 EAL: Heap on socket 0 was expanded by 258MB 00:05:05.563 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.563 EAL: request: mp_malloc_sync 00:05:05.563 EAL: No shared files mode enabled, IPC is disabled 00:05:05.563 EAL: Heap on socket 0 was shrunk by 258MB 00:05:06.130 EAL: Trying to obtain current memory policy. 00:05:06.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.130 EAL: Restoring previous memory policy: 4 00:05:06.130 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.130 EAL: request: mp_malloc_sync 00:05:06.130 EAL: No shared files mode enabled, IPC is disabled 00:05:06.130 EAL: Heap on socket 0 was expanded by 514MB 00:05:07.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.331 EAL: request: mp_malloc_sync 00:05:07.331 EAL: No shared files mode enabled, IPC is disabled 00:05:07.331 EAL: Heap on socket 0 was shrunk by 514MB 00:05:08.272 EAL: Trying to obtain current memory policy. 
00:05:08.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.530 EAL: Restoring previous memory policy: 4 00:05:08.530 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.530 EAL: request: mp_malloc_sync 00:05:08.530 EAL: No shared files mode enabled, IPC is disabled 00:05:08.530 EAL: Heap on socket 0 was expanded by 1026MB 00:05:10.432 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.701 EAL: request: mp_malloc_sync 00:05:10.701 EAL: No shared files mode enabled, IPC is disabled 00:05:10.701 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:12.095 passed 00:05:12.095 00:05:12.095 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.095 suites 1 1 n/a 0 0 00:05:12.095 tests 2 2 2 0 0 00:05:12.095 asserts 5866 5866 5866 0 n/a 00:05:12.095 00:05:12.095 Elapsed time = 8.675 seconds 00:05:12.095 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.095 EAL: request: mp_malloc_sync 00:05:12.095 EAL: No shared files mode enabled, IPC is disabled 00:05:12.095 EAL: Heap on socket 0 was shrunk by 2MB 00:05:12.095 EAL: No shared files mode enabled, IPC is disabled 00:05:12.095 EAL: No shared files mode enabled, IPC is disabled 00:05:12.095 EAL: No shared files mode enabled, IPC is disabled 00:05:12.353 00:05:12.353 real 0m9.001s 00:05:12.353 user 0m7.619s 00:05:12.353 sys 0m1.219s 00:05:12.353 12:22:23 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.353 12:22:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:12.353 ************************************ 00:05:12.353 END TEST env_vtophys 00:05:12.353 ************************************ 00:05:12.353 12:22:24 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:12.353 12:22:24 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.353 12:22:24 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.353 12:22:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.353 
************************************ 00:05:12.353 START TEST env_pci 00:05:12.353 ************************************ 00:05:12.353 12:22:24 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:12.353 00:05:12.353 00:05:12.353 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.353 http://cunit.sourceforge.net/ 00:05:12.353 00:05:12.353 00:05:12.353 Suite: pci 00:05:12.353 Test: pci_hook ...[2024-09-30 12:22:24.111610] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56681 has claimed it 00:05:12.353 passed 00:05:12.353 00:05:12.353 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.353 suites 1 1 n/a 0 0 00:05:12.353 tests 1 1 1 0 0 00:05:12.353 asserts 25 25 25 0 n/a 00:05:12.353 00:05:12.353 Elapsed time = 0.006 seconds 00:05:12.353 EAL: Cannot find device (10000:00:01.0) 00:05:12.353 EAL: Failed to attach device on primary process 00:05:12.353 00:05:12.353 real 0m0.113s 00:05:12.353 user 0m0.050s 00:05:12.353 sys 0m0.061s 00:05:12.353 12:22:24 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.353 12:22:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:12.353 ************************************ 00:05:12.353 END TEST env_pci 00:05:12.353 ************************************ 00:05:12.353 12:22:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:12.353 12:22:24 env -- env/env.sh@15 -- # uname 00:05:12.353 12:22:24 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:12.353 12:22:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:12.353 12:22:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.353 12:22:24 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:12.353 12:22:24 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.353 12:22:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.611 ************************************ 00:05:12.611 START TEST env_dpdk_post_init 00:05:12.611 ************************************ 00:05:12.611 12:22:24 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.611 EAL: Detected CPU lcores: 10 00:05:12.611 EAL: Detected NUMA nodes: 1 00:05:12.611 EAL: Detected shared linkage of DPDK 00:05:12.611 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.611 EAL: Selected IOVA mode 'PA' 00:05:12.612 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:12.612 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:12.612 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:12.869 Starting DPDK initialization... 00:05:12.869 Starting SPDK post initialization... 00:05:12.869 SPDK NVMe probe 00:05:12.869 Attaching to 0000:00:10.0 00:05:12.869 Attaching to 0000:00:11.0 00:05:12.869 Attached to 0000:00:10.0 00:05:12.869 Attached to 0000:00:11.0 00:05:12.869 Cleaning up... 
00:05:12.869 00:05:12.869 real 0m0.294s 00:05:12.869 user 0m0.103s 00:05:12.869 sys 0m0.091s 00:05:12.869 12:22:24 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.869 12:22:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:12.869 ************************************ 00:05:12.869 END TEST env_dpdk_post_init 00:05:12.869 ************************************ 00:05:12.869 12:22:24 env -- env/env.sh@26 -- # uname 00:05:12.869 12:22:24 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:12.869 12:22:24 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:12.869 12:22:24 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.869 12:22:24 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.869 12:22:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.869 ************************************ 00:05:12.869 START TEST env_mem_callbacks 00:05:12.869 ************************************ 00:05:12.870 12:22:24 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:12.870 EAL: Detected CPU lcores: 10 00:05:12.870 EAL: Detected NUMA nodes: 1 00:05:12.870 EAL: Detected shared linkage of DPDK 00:05:12.870 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.870 EAL: Selected IOVA mode 'PA' 00:05:13.127 00:05:13.127 00:05:13.127 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.127 http://cunit.sourceforge.net/ 00:05:13.127 00:05:13.127 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:13.127 00:05:13.127 Suite: memory 00:05:13.127 Test: test ... 
00:05:13.127 register 0x200000200000 2097152 00:05:13.127 malloc 3145728 00:05:13.127 register 0x200000400000 4194304 00:05:13.127 buf 0x2000004fffc0 len 3145728 PASSED 00:05:13.127 malloc 64 00:05:13.128 buf 0x2000004ffec0 len 64 PASSED 00:05:13.128 malloc 4194304 00:05:13.128 register 0x200000800000 6291456 00:05:13.128 buf 0x2000009fffc0 len 4194304 PASSED 00:05:13.128 free 0x2000004fffc0 3145728 00:05:13.128 free 0x2000004ffec0 64 00:05:13.128 unregister 0x200000400000 4194304 PASSED 00:05:13.128 free 0x2000009fffc0 4194304 00:05:13.128 unregister 0x200000800000 6291456 PASSED 00:05:13.128 malloc 8388608 00:05:13.128 register 0x200000400000 10485760 00:05:13.128 buf 0x2000005fffc0 len 8388608 PASSED 00:05:13.128 free 0x2000005fffc0 8388608 00:05:13.128 unregister 0x200000400000 10485760 PASSED 00:05:13.128 passed 00:05:13.128 00:05:13.128 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.128 suites 1 1 n/a 0 0 00:05:13.128 tests 1 1 1 0 0 00:05:13.128 asserts 15 15 15 0 n/a 00:05:13.128 00:05:13.128 Elapsed time = 0.080 seconds 00:05:13.128 00:05:13.128 real 0m0.276s 00:05:13.128 user 0m0.103s 00:05:13.128 sys 0m0.072s 00:05:13.128 12:22:24 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.128 12:22:24 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:13.128 ************************************ 00:05:13.128 END TEST env_mem_callbacks 00:05:13.128 ************************************ 00:05:13.128 00:05:13.128 real 0m10.534s 00:05:13.128 user 0m8.355s 00:05:13.128 sys 0m1.813s 00:05:13.128 12:22:24 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.128 12:22:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.128 ************************************ 00:05:13.128 END TEST env 00:05:13.128 ************************************ 00:05:13.128 12:22:25 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:13.128 12:22:25 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.128 12:22:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.128 12:22:25 -- common/autotest_common.sh@10 -- # set +x 00:05:13.128 ************************************ 00:05:13.128 START TEST rpc 00:05:13.128 ************************************ 00:05:13.128 12:22:25 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:13.386 * Looking for test storage... 00:05:13.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:13.386 12:22:25 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:13.386 12:22:25 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:13.386 12:22:25 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:13.386 12:22:25 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:13.386 12:22:25 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.386 12:22:25 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.386 12:22:25 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.386 12:22:25 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.386 12:22:25 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.386 12:22:25 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.386 12:22:25 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.386 12:22:25 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.386 12:22:25 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.386 12:22:25 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.386 12:22:25 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.386 12:22:25 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:13.386 12:22:25 rpc -- scripts/common.sh@345 -- # : 1 00:05:13.386 12:22:25 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.386 12:22:25 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.386 12:22:25 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:13.386 12:22:25 rpc -- scripts/common.sh@353 -- # local d=1 00:05:13.386 12:22:25 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.386 12:22:25 rpc -- scripts/common.sh@355 -- # echo 1 00:05:13.386 12:22:25 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.386 12:22:25 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:13.386 12:22:25 rpc -- scripts/common.sh@353 -- # local d=2 00:05:13.386 12:22:25 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.386 12:22:25 rpc -- scripts/common.sh@355 -- # echo 2 00:05:13.386 12:22:25 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.386 12:22:25 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.386 12:22:25 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.386 12:22:25 rpc -- scripts/common.sh@368 -- # return 0 00:05:13.386 12:22:25 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.386 12:22:25 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:13.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.386 --rc genhtml_branch_coverage=1 00:05:13.386 --rc genhtml_function_coverage=1 00:05:13.386 --rc genhtml_legend=1 00:05:13.386 --rc geninfo_all_blocks=1 00:05:13.386 --rc geninfo_unexecuted_blocks=1 00:05:13.386 00:05:13.386 ' 00:05:13.386 12:22:25 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:13.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.386 --rc genhtml_branch_coverage=1 00:05:13.386 --rc genhtml_function_coverage=1 00:05:13.386 --rc genhtml_legend=1 00:05:13.386 --rc geninfo_all_blocks=1 00:05:13.386 --rc geninfo_unexecuted_blocks=1 00:05:13.386 00:05:13.386 ' 00:05:13.386 12:22:25 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:13.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:13.386 --rc genhtml_branch_coverage=1 00:05:13.386 --rc genhtml_function_coverage=1 00:05:13.386 --rc genhtml_legend=1 00:05:13.386 --rc geninfo_all_blocks=1 00:05:13.386 --rc geninfo_unexecuted_blocks=1 00:05:13.386 00:05:13.386 ' 00:05:13.386 12:22:25 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:13.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.386 --rc genhtml_branch_coverage=1 00:05:13.386 --rc genhtml_function_coverage=1 00:05:13.386 --rc genhtml_legend=1 00:05:13.386 --rc geninfo_all_blocks=1 00:05:13.386 --rc geninfo_unexecuted_blocks=1 00:05:13.386 00:05:13.386 ' 00:05:13.386 12:22:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56808 00:05:13.386 12:22:25 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:13.386 12:22:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.386 12:22:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56808 00:05:13.386 12:22:25 rpc -- common/autotest_common.sh@831 -- # '[' -z 56808 ']' 00:05:13.386 12:22:25 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.386 12:22:25 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.386 12:22:25 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.386 12:22:25 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.386 12:22:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.643 [2024-09-30 12:22:25.353047] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:13.644 [2024-09-30 12:22:25.353161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56808 ] 00:05:13.644 [2024-09-30 12:22:25.521449] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.901 [2024-09-30 12:22:25.721922] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:13.901 [2024-09-30 12:22:25.721982] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56808' to capture a snapshot of events at runtime. 00:05:13.901 [2024-09-30 12:22:25.721992] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:13.901 [2024-09-30 12:22:25.722002] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:13.901 [2024-09-30 12:22:25.722010] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56808 for offline analysis/debug. 
00:05:13.901 [2024-09-30 12:22:25.722049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.838 12:22:26 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.838 12:22:26 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:14.838 12:22:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:14.838 12:22:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:14.838 12:22:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:14.838 12:22:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:14.838 12:22:26 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.838 12:22:26 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.838 12:22:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.838 ************************************ 00:05:14.838 START TEST rpc_integrity 00:05:14.838 ************************************ 00:05:14.838 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:14.838 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:14.838 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.838 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.838 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.838 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:14.838 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:14.838 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:14.838 12:22:26 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:14.838 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.838 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.838 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.838 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:14.838 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:14.838 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.838 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.838 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.838 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:14.838 { 00:05:14.838 "name": "Malloc0", 00:05:14.838 "aliases": [ 00:05:14.838 "511fa2f2-686d-4ac3-a50c-ebf82f528498" 00:05:14.838 ], 00:05:14.838 "product_name": "Malloc disk", 00:05:14.838 "block_size": 512, 00:05:14.838 "num_blocks": 16384, 00:05:14.838 "uuid": "511fa2f2-686d-4ac3-a50c-ebf82f528498", 00:05:14.838 "assigned_rate_limits": { 00:05:14.838 "rw_ios_per_sec": 0, 00:05:14.838 "rw_mbytes_per_sec": 0, 00:05:14.838 "r_mbytes_per_sec": 0, 00:05:14.838 "w_mbytes_per_sec": 0 00:05:14.838 }, 00:05:14.838 "claimed": false, 00:05:14.838 "zoned": false, 00:05:14.838 "supported_io_types": { 00:05:14.838 "read": true, 00:05:14.838 "write": true, 00:05:14.838 "unmap": true, 00:05:14.838 "flush": true, 00:05:14.838 "reset": true, 00:05:14.838 "nvme_admin": false, 00:05:14.838 "nvme_io": false, 00:05:14.838 "nvme_io_md": false, 00:05:14.838 "write_zeroes": true, 00:05:14.838 "zcopy": true, 00:05:14.838 "get_zone_info": false, 00:05:14.838 "zone_management": false, 00:05:14.838 "zone_append": false, 00:05:14.838 "compare": false, 00:05:14.838 "compare_and_write": false, 00:05:14.838 "abort": true, 00:05:14.838 "seek_hole": false, 
00:05:14.838 "seek_data": false, 00:05:14.838 "copy": true, 00:05:14.838 "nvme_iov_md": false 00:05:14.838 }, 00:05:14.838 "memory_domains": [ 00:05:14.838 { 00:05:14.838 "dma_device_id": "system", 00:05:14.838 "dma_device_type": 1 00:05:14.838 }, 00:05:14.838 { 00:05:14.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.838 "dma_device_type": 2 00:05:14.838 } 00:05:14.838 ], 00:05:14.838 "driver_specific": {} 00:05:14.838 } 00:05:14.838 ]' 00:05:14.838 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:14.838 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:14.838 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:14.838 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.838 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.098 [2024-09-30 12:22:26.737316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:15.098 [2024-09-30 12:22:26.737384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:15.098 [2024-09-30 12:22:26.737405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:15.098 [2024-09-30 12:22:26.737416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:15.098 [2024-09-30 12:22:26.739510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:15.098 [2024-09-30 12:22:26.739553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:15.098 Passthru0 00:05:15.098 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.098 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:15.098 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.098 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:15.098 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.098 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:15.098 { 00:05:15.098 "name": "Malloc0", 00:05:15.098 "aliases": [ 00:05:15.098 "511fa2f2-686d-4ac3-a50c-ebf82f528498" 00:05:15.098 ], 00:05:15.098 "product_name": "Malloc disk", 00:05:15.098 "block_size": 512, 00:05:15.098 "num_blocks": 16384, 00:05:15.098 "uuid": "511fa2f2-686d-4ac3-a50c-ebf82f528498", 00:05:15.098 "assigned_rate_limits": { 00:05:15.098 "rw_ios_per_sec": 0, 00:05:15.098 "rw_mbytes_per_sec": 0, 00:05:15.098 "r_mbytes_per_sec": 0, 00:05:15.098 "w_mbytes_per_sec": 0 00:05:15.098 }, 00:05:15.098 "claimed": true, 00:05:15.098 "claim_type": "exclusive_write", 00:05:15.098 "zoned": false, 00:05:15.098 "supported_io_types": { 00:05:15.098 "read": true, 00:05:15.098 "write": true, 00:05:15.098 "unmap": true, 00:05:15.098 "flush": true, 00:05:15.098 "reset": true, 00:05:15.098 "nvme_admin": false, 00:05:15.098 "nvme_io": false, 00:05:15.098 "nvme_io_md": false, 00:05:15.098 "write_zeroes": true, 00:05:15.098 "zcopy": true, 00:05:15.098 "get_zone_info": false, 00:05:15.098 "zone_management": false, 00:05:15.098 "zone_append": false, 00:05:15.098 "compare": false, 00:05:15.098 "compare_and_write": false, 00:05:15.098 "abort": true, 00:05:15.098 "seek_hole": false, 00:05:15.098 "seek_data": false, 00:05:15.098 "copy": true, 00:05:15.098 "nvme_iov_md": false 00:05:15.098 }, 00:05:15.098 "memory_domains": [ 00:05:15.098 { 00:05:15.098 "dma_device_id": "system", 00:05:15.098 "dma_device_type": 1 00:05:15.098 }, 00:05:15.098 { 00:05:15.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.098 "dma_device_type": 2 00:05:15.098 } 00:05:15.098 ], 00:05:15.098 "driver_specific": {} 00:05:15.098 }, 00:05:15.098 { 00:05:15.098 "name": "Passthru0", 00:05:15.098 "aliases": [ 00:05:15.098 "b7a25edd-287e-5b28-8786-d48d569b6928" 00:05:15.098 ], 00:05:15.098 "product_name": "passthru", 00:05:15.098 
"block_size": 512, 00:05:15.098 "num_blocks": 16384, 00:05:15.098 "uuid": "b7a25edd-287e-5b28-8786-d48d569b6928", 00:05:15.098 "assigned_rate_limits": { 00:05:15.098 "rw_ios_per_sec": 0, 00:05:15.098 "rw_mbytes_per_sec": 0, 00:05:15.098 "r_mbytes_per_sec": 0, 00:05:15.098 "w_mbytes_per_sec": 0 00:05:15.098 }, 00:05:15.098 "claimed": false, 00:05:15.098 "zoned": false, 00:05:15.098 "supported_io_types": { 00:05:15.098 "read": true, 00:05:15.098 "write": true, 00:05:15.098 "unmap": true, 00:05:15.098 "flush": true, 00:05:15.098 "reset": true, 00:05:15.098 "nvme_admin": false, 00:05:15.098 "nvme_io": false, 00:05:15.098 "nvme_io_md": false, 00:05:15.098 "write_zeroes": true, 00:05:15.098 "zcopy": true, 00:05:15.098 "get_zone_info": false, 00:05:15.098 "zone_management": false, 00:05:15.098 "zone_append": false, 00:05:15.098 "compare": false, 00:05:15.098 "compare_and_write": false, 00:05:15.098 "abort": true, 00:05:15.098 "seek_hole": false, 00:05:15.098 "seek_data": false, 00:05:15.098 "copy": true, 00:05:15.098 "nvme_iov_md": false 00:05:15.098 }, 00:05:15.098 "memory_domains": [ 00:05:15.098 { 00:05:15.098 "dma_device_id": "system", 00:05:15.098 "dma_device_type": 1 00:05:15.098 }, 00:05:15.098 { 00:05:15.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.098 "dma_device_type": 2 00:05:15.098 } 00:05:15.098 ], 00:05:15.098 "driver_specific": { 00:05:15.098 "passthru": { 00:05:15.098 "name": "Passthru0", 00:05:15.098 "base_bdev_name": "Malloc0" 00:05:15.098 } 00:05:15.098 } 00:05:15.098 } 00:05:15.098 ]' 00:05:15.098 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:15.098 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:15.098 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:15.098 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.098 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.098 12:22:26 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.098 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:15.098 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.098 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.098 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.098 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:15.098 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.099 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.099 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.099 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:15.099 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:15.099 12:22:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:15.099 00:05:15.099 real 0m0.348s 00:05:15.099 user 0m0.186s 00:05:15.099 sys 0m0.059s 00:05:15.099 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.099 12:22:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.099 ************************************ 00:05:15.099 END TEST rpc_integrity 00:05:15.099 ************************************ 00:05:15.099 12:22:26 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:15.099 12:22:26 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.099 12:22:26 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.099 12:22:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.099 ************************************ 00:05:15.099 START TEST rpc_plugins 00:05:15.099 ************************************ 00:05:15.099 12:22:26 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:15.099 12:22:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:15.099 12:22:26 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.099 12:22:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.359 12:22:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.359 12:22:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:15.359 12:22:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:15.359 12:22:27 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.359 12:22:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.359 12:22:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.359 12:22:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:15.359 { 00:05:15.359 "name": "Malloc1", 00:05:15.359 "aliases": [ 00:05:15.359 "e56d6719-d4fc-4e31-a488-dfe6b182ce9d" 00:05:15.359 ], 00:05:15.359 "product_name": "Malloc disk", 00:05:15.359 "block_size": 4096, 00:05:15.359 "num_blocks": 256, 00:05:15.359 "uuid": "e56d6719-d4fc-4e31-a488-dfe6b182ce9d", 00:05:15.359 "assigned_rate_limits": { 00:05:15.359 "rw_ios_per_sec": 0, 00:05:15.359 "rw_mbytes_per_sec": 0, 00:05:15.359 "r_mbytes_per_sec": 0, 00:05:15.359 "w_mbytes_per_sec": 0 00:05:15.359 }, 00:05:15.359 "claimed": false, 00:05:15.359 "zoned": false, 00:05:15.359 "supported_io_types": { 00:05:15.359 "read": true, 00:05:15.359 "write": true, 00:05:15.359 "unmap": true, 00:05:15.359 "flush": true, 00:05:15.359 "reset": true, 00:05:15.359 "nvme_admin": false, 00:05:15.359 "nvme_io": false, 00:05:15.359 "nvme_io_md": false, 00:05:15.359 "write_zeroes": true, 00:05:15.359 "zcopy": true, 00:05:15.359 "get_zone_info": false, 00:05:15.359 "zone_management": false, 00:05:15.359 "zone_append": false, 00:05:15.359 "compare": false, 00:05:15.359 "compare_and_write": false, 00:05:15.359 "abort": true, 00:05:15.359 "seek_hole": false, 00:05:15.359 "seek_data": false, 00:05:15.359 "copy": 
true, 00:05:15.359 "nvme_iov_md": false 00:05:15.359 }, 00:05:15.359 "memory_domains": [ 00:05:15.359 { 00:05:15.359 "dma_device_id": "system", 00:05:15.359 "dma_device_type": 1 00:05:15.359 }, 00:05:15.359 { 00:05:15.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.359 "dma_device_type": 2 00:05:15.359 } 00:05:15.359 ], 00:05:15.359 "driver_specific": {} 00:05:15.359 } 00:05:15.359 ]' 00:05:15.359 12:22:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:15.359 12:22:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:15.359 12:22:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:15.359 12:22:27 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.359 12:22:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.359 12:22:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.359 12:22:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:15.359 12:22:27 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.359 12:22:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.359 12:22:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.359 12:22:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:15.359 12:22:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:15.359 12:22:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:15.359 00:05:15.359 real 0m0.171s 00:05:15.359 user 0m0.097s 00:05:15.359 sys 0m0.032s 00:05:15.359 12:22:27 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.359 12:22:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.359 ************************************ 00:05:15.359 END TEST rpc_plugins 00:05:15.359 ************************************ 00:05:15.359 12:22:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:15.359 12:22:27 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.359 12:22:27 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.359 12:22:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.359 ************************************ 00:05:15.359 START TEST rpc_trace_cmd_test 00:05:15.359 ************************************ 00:05:15.359 12:22:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:15.359 12:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:15.359 12:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:15.359 12:22:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.359 12:22:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.359 12:22:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.359 12:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:15.359 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56808", 00:05:15.359 "tpoint_group_mask": "0x8", 00:05:15.359 "iscsi_conn": { 00:05:15.359 "mask": "0x2", 00:05:15.359 "tpoint_mask": "0x0" 00:05:15.359 }, 00:05:15.359 "scsi": { 00:05:15.359 "mask": "0x4", 00:05:15.359 "tpoint_mask": "0x0" 00:05:15.359 }, 00:05:15.359 "bdev": { 00:05:15.359 "mask": "0x8", 00:05:15.359 "tpoint_mask": "0xffffffffffffffff" 00:05:15.359 }, 00:05:15.359 "nvmf_rdma": { 00:05:15.359 "mask": "0x10", 00:05:15.359 "tpoint_mask": "0x0" 00:05:15.359 }, 00:05:15.359 "nvmf_tcp": { 00:05:15.359 "mask": "0x20", 00:05:15.359 "tpoint_mask": "0x0" 00:05:15.359 }, 00:05:15.359 "ftl": { 00:05:15.359 "mask": "0x40", 00:05:15.359 "tpoint_mask": "0x0" 00:05:15.359 }, 00:05:15.359 "blobfs": { 00:05:15.359 "mask": "0x80", 00:05:15.359 "tpoint_mask": "0x0" 00:05:15.359 }, 00:05:15.359 "dsa": { 00:05:15.359 "mask": "0x200", 00:05:15.359 "tpoint_mask": "0x0" 00:05:15.359 }, 00:05:15.359 "thread": { 00:05:15.359 "mask": "0x400", 00:05:15.359 
"tpoint_mask": "0x0" 00:05:15.359 }, 00:05:15.359 "nvme_pcie": { 00:05:15.359 "mask": "0x800", 00:05:15.359 "tpoint_mask": "0x0" 00:05:15.359 }, 00:05:15.359 "iaa": { 00:05:15.359 "mask": "0x1000", 00:05:15.359 "tpoint_mask": "0x0" 00:05:15.359 }, 00:05:15.359 "nvme_tcp": { 00:05:15.359 "mask": "0x2000", 00:05:15.359 "tpoint_mask": "0x0" 00:05:15.359 }, 00:05:15.359 "bdev_nvme": { 00:05:15.359 "mask": "0x4000", 00:05:15.359 "tpoint_mask": "0x0" 00:05:15.359 }, 00:05:15.359 "sock": { 00:05:15.359 "mask": "0x8000", 00:05:15.359 "tpoint_mask": "0x0" 00:05:15.359 }, 00:05:15.359 "blob": { 00:05:15.359 "mask": "0x10000", 00:05:15.359 "tpoint_mask": "0x0" 00:05:15.359 }, 00:05:15.359 "bdev_raid": { 00:05:15.359 "mask": "0x20000", 00:05:15.359 "tpoint_mask": "0x0" 00:05:15.359 } 00:05:15.359 }' 00:05:15.359 12:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:15.618 12:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:15.618 12:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:15.618 12:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:15.618 12:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:15.618 12:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:15.618 12:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:15.618 12:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:15.618 12:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:15.618 12:22:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:15.618 00:05:15.618 real 0m0.267s 00:05:15.618 user 0m0.219s 00:05:15.618 sys 0m0.035s 00:05:15.618 12:22:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.618 12:22:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.618 
************************************ 00:05:15.618 END TEST rpc_trace_cmd_test 00:05:15.618 ************************************ 00:05:15.877 12:22:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:15.877 12:22:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:15.877 12:22:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:15.877 12:22:27 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.877 12:22:27 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.877 12:22:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.877 ************************************ 00:05:15.877 START TEST rpc_daemon_integrity 00:05:15.877 ************************************ 00:05:15.877 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:15.877 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:15.877 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.877 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.877 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.877 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:15.877 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:15.877 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd 
bdev_get_bdevs 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:15.878 { 00:05:15.878 "name": "Malloc2", 00:05:15.878 "aliases": [ 00:05:15.878 "7f67df95-efc7-443a-aef5-ba39301e03cc" 00:05:15.878 ], 00:05:15.878 "product_name": "Malloc disk", 00:05:15.878 "block_size": 512, 00:05:15.878 "num_blocks": 16384, 00:05:15.878 "uuid": "7f67df95-efc7-443a-aef5-ba39301e03cc", 00:05:15.878 "assigned_rate_limits": { 00:05:15.878 "rw_ios_per_sec": 0, 00:05:15.878 "rw_mbytes_per_sec": 0, 00:05:15.878 "r_mbytes_per_sec": 0, 00:05:15.878 "w_mbytes_per_sec": 0 00:05:15.878 }, 00:05:15.878 "claimed": false, 00:05:15.878 "zoned": false, 00:05:15.878 "supported_io_types": { 00:05:15.878 "read": true, 00:05:15.878 "write": true, 00:05:15.878 "unmap": true, 00:05:15.878 "flush": true, 00:05:15.878 "reset": true, 00:05:15.878 "nvme_admin": false, 00:05:15.878 "nvme_io": false, 00:05:15.878 "nvme_io_md": false, 00:05:15.878 "write_zeroes": true, 00:05:15.878 "zcopy": true, 00:05:15.878 "get_zone_info": false, 00:05:15.878 "zone_management": false, 00:05:15.878 "zone_append": false, 00:05:15.878 "compare": false, 00:05:15.878 "compare_and_write": false, 00:05:15.878 "abort": true, 00:05:15.878 "seek_hole": false, 00:05:15.878 "seek_data": false, 00:05:15.878 "copy": true, 00:05:15.878 "nvme_iov_md": false 00:05:15.878 }, 00:05:15.878 "memory_domains": [ 00:05:15.878 { 00:05:15.878 "dma_device_id": "system", 00:05:15.878 "dma_device_type": 1 00:05:15.878 }, 00:05:15.878 { 00:05:15.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.878 "dma_device_type": 2 00:05:15.878 } 00:05:15.878 ], 00:05:15.878 "driver_specific": {} 00:05:15.878 } 00:05:15.878 ]' 00:05:15.878 
12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.878 [2024-09-30 12:22:27.705821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:15.878 [2024-09-30 12:22:27.705880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:15.878 [2024-09-30 12:22:27.705901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:15.878 [2024-09-30 12:22:27.705911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:15.878 [2024-09-30 12:22:27.707949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:15.878 [2024-09-30 12:22:27.707991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:15.878 Passthru0 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:15.878 { 00:05:15.878 "name": "Malloc2", 00:05:15.878 "aliases": [ 00:05:15.878 "7f67df95-efc7-443a-aef5-ba39301e03cc" 00:05:15.878 ], 00:05:15.878 "product_name": "Malloc disk", 00:05:15.878 "block_size": 512, 
00:05:15.878 "num_blocks": 16384, 00:05:15.878 "uuid": "7f67df95-efc7-443a-aef5-ba39301e03cc", 00:05:15.878 "assigned_rate_limits": { 00:05:15.878 "rw_ios_per_sec": 0, 00:05:15.878 "rw_mbytes_per_sec": 0, 00:05:15.878 "r_mbytes_per_sec": 0, 00:05:15.878 "w_mbytes_per_sec": 0 00:05:15.878 }, 00:05:15.878 "claimed": true, 00:05:15.878 "claim_type": "exclusive_write", 00:05:15.878 "zoned": false, 00:05:15.878 "supported_io_types": { 00:05:15.878 "read": true, 00:05:15.878 "write": true, 00:05:15.878 "unmap": true, 00:05:15.878 "flush": true, 00:05:15.878 "reset": true, 00:05:15.878 "nvme_admin": false, 00:05:15.878 "nvme_io": false, 00:05:15.878 "nvme_io_md": false, 00:05:15.878 "write_zeroes": true, 00:05:15.878 "zcopy": true, 00:05:15.878 "get_zone_info": false, 00:05:15.878 "zone_management": false, 00:05:15.878 "zone_append": false, 00:05:15.878 "compare": false, 00:05:15.878 "compare_and_write": false, 00:05:15.878 "abort": true, 00:05:15.878 "seek_hole": false, 00:05:15.878 "seek_data": false, 00:05:15.878 "copy": true, 00:05:15.878 "nvme_iov_md": false 00:05:15.878 }, 00:05:15.878 "memory_domains": [ 00:05:15.878 { 00:05:15.878 "dma_device_id": "system", 00:05:15.878 "dma_device_type": 1 00:05:15.878 }, 00:05:15.878 { 00:05:15.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.878 "dma_device_type": 2 00:05:15.878 } 00:05:15.878 ], 00:05:15.878 "driver_specific": {} 00:05:15.878 }, 00:05:15.878 { 00:05:15.878 "name": "Passthru0", 00:05:15.878 "aliases": [ 00:05:15.878 "2ae50c17-f221-52c2-97c6-2b71ead35008" 00:05:15.878 ], 00:05:15.878 "product_name": "passthru", 00:05:15.878 "block_size": 512, 00:05:15.878 "num_blocks": 16384, 00:05:15.878 "uuid": "2ae50c17-f221-52c2-97c6-2b71ead35008", 00:05:15.878 "assigned_rate_limits": { 00:05:15.878 "rw_ios_per_sec": 0, 00:05:15.878 "rw_mbytes_per_sec": 0, 00:05:15.878 "r_mbytes_per_sec": 0, 00:05:15.878 "w_mbytes_per_sec": 0 00:05:15.878 }, 00:05:15.878 "claimed": false, 00:05:15.878 "zoned": false, 00:05:15.878 
"supported_io_types": { 00:05:15.878 "read": true, 00:05:15.878 "write": true, 00:05:15.878 "unmap": true, 00:05:15.878 "flush": true, 00:05:15.878 "reset": true, 00:05:15.878 "nvme_admin": false, 00:05:15.878 "nvme_io": false, 00:05:15.878 "nvme_io_md": false, 00:05:15.878 "write_zeroes": true, 00:05:15.878 "zcopy": true, 00:05:15.878 "get_zone_info": false, 00:05:15.878 "zone_management": false, 00:05:15.878 "zone_append": false, 00:05:15.878 "compare": false, 00:05:15.878 "compare_and_write": false, 00:05:15.878 "abort": true, 00:05:15.878 "seek_hole": false, 00:05:15.878 "seek_data": false, 00:05:15.878 "copy": true, 00:05:15.878 "nvme_iov_md": false 00:05:15.878 }, 00:05:15.878 "memory_domains": [ 00:05:15.878 { 00:05:15.878 "dma_device_id": "system", 00:05:15.878 "dma_device_type": 1 00:05:15.878 }, 00:05:15.878 { 00:05:15.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.878 "dma_device_type": 2 00:05:15.878 } 00:05:15.878 ], 00:05:15.878 "driver_specific": { 00:05:15.878 "passthru": { 00:05:15.878 "name": "Passthru0", 00:05:15.878 "base_bdev_name": "Malloc2" 00:05:15.878 } 00:05:15.878 } 00:05:15.878 } 00:05:15.878 ]' 00:05:15.878 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:16.138 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:16.138 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:16.138 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.138 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.138 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.138 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:16.138 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.138 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:05:16.138 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.138 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:16.138 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.138 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.138 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.138 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:16.138 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:16.138 12:22:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:16.138 00:05:16.138 real 0m0.352s 00:05:16.138 user 0m0.203s 00:05:16.138 sys 0m0.050s 00:05:16.138 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.138 12:22:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.138 ************************************ 00:05:16.138 END TEST rpc_daemon_integrity 00:05:16.138 ************************************ 00:05:16.138 12:22:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:16.138 12:22:27 rpc -- rpc/rpc.sh@84 -- # killprocess 56808 00:05:16.138 12:22:27 rpc -- common/autotest_common.sh@950 -- # '[' -z 56808 ']' 00:05:16.138 12:22:27 rpc -- common/autotest_common.sh@954 -- # kill -0 56808 00:05:16.138 12:22:27 rpc -- common/autotest_common.sh@955 -- # uname 00:05:16.138 12:22:27 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.138 12:22:27 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56808 00:05:16.138 12:22:27 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.138 12:22:27 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.138 killing process with pid 56808 00:05:16.138 12:22:27 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 56808' 00:05:16.138 12:22:27 rpc -- common/autotest_common.sh@969 -- # kill 56808 00:05:16.138 12:22:27 rpc -- common/autotest_common.sh@974 -- # wait 56808 00:05:18.673 00:05:18.673 real 0m5.396s 00:05:18.673 user 0m5.914s 00:05:18.673 sys 0m0.949s 00:05:18.673 12:22:30 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.673 12:22:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.673 ************************************ 00:05:18.673 END TEST rpc 00:05:18.673 ************************************ 00:05:18.673 12:22:30 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:18.673 12:22:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.673 12:22:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.673 12:22:30 -- common/autotest_common.sh@10 -- # set +x 00:05:18.673 ************************************ 00:05:18.673 START TEST skip_rpc 00:05:18.673 ************************************ 00:05:18.673 12:22:30 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:18.933 * Looking for test storage... 
00:05:18.933 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:18.933 12:22:30 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:18.933 12:22:30 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:18.933 12:22:30 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:18.933 12:22:30 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.933 12:22:30 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:18.933 12:22:30 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.933 12:22:30 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:18.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.933 --rc genhtml_branch_coverage=1 00:05:18.933 --rc genhtml_function_coverage=1 00:05:18.933 --rc genhtml_legend=1 00:05:18.933 --rc geninfo_all_blocks=1 00:05:18.933 --rc geninfo_unexecuted_blocks=1 00:05:18.933 00:05:18.933 ' 00:05:18.933 12:22:30 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:18.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.933 --rc genhtml_branch_coverage=1 00:05:18.933 --rc genhtml_function_coverage=1 00:05:18.933 --rc genhtml_legend=1 00:05:18.933 --rc geninfo_all_blocks=1 00:05:18.933 --rc geninfo_unexecuted_blocks=1 00:05:18.933 00:05:18.933 ' 00:05:18.933 12:22:30 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:18.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.933 --rc genhtml_branch_coverage=1 00:05:18.933 --rc genhtml_function_coverage=1 00:05:18.933 --rc genhtml_legend=1 00:05:18.933 --rc geninfo_all_blocks=1 00:05:18.933 --rc geninfo_unexecuted_blocks=1 00:05:18.933 00:05:18.933 ' 00:05:18.933 12:22:30 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:18.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.933 --rc genhtml_branch_coverage=1 00:05:18.933 --rc genhtml_function_coverage=1 00:05:18.933 --rc genhtml_legend=1 00:05:18.933 --rc geninfo_all_blocks=1 00:05:18.933 --rc geninfo_unexecuted_blocks=1 00:05:18.933 00:05:18.933 ' 00:05:18.933 12:22:30 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:18.933 12:22:30 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:18.933 12:22:30 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:18.933 12:22:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.933 12:22:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.933 12:22:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.933 ************************************ 00:05:18.933 START TEST skip_rpc 00:05:18.933 ************************************ 00:05:18.933 12:22:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:18.933 12:22:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57048 00:05:18.933 12:22:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:18.933 12:22:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.933 12:22:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:18.933 [2024-09-30 12:22:30.824659] Starting SPDK v25.01-pre 
git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:18.933 [2024-09-30 12:22:30.824791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57048 ] 00:05:19.193 [2024-09-30 12:22:30.988719] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.453 [2024-09-30 12:22:31.202952] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57048 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57048 ']' 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57048 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57048 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:24.730 killing process with pid 57048 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57048' 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57048 00:05:24.730 12:22:35 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57048 00:05:26.649 00:05:26.649 real 0m7.471s 00:05:26.649 user 0m6.995s 00:05:26.649 sys 0m0.394s 00:05:26.649 12:22:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.649 ************************************ 00:05:26.649 END TEST skip_rpc 00:05:26.649 ************************************ 00:05:26.649 12:22:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.649 12:22:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:26.649 12:22:38 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.649 12:22:38 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.649 12:22:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.649 
************************************ 00:05:26.649 START TEST skip_rpc_with_json 00:05:26.649 ************************************ 00:05:26.649 12:22:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:26.649 12:22:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:26.649 12:22:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57152 00:05:26.649 12:22:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.649 12:22:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.649 12:22:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57152 00:05:26.649 12:22:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57152 ']' 00:05:26.649 12:22:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.649 12:22:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.649 12:22:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.649 12:22:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.649 12:22:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.649 [2024-09-30 12:22:38.361793] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:26.649 [2024-09-30 12:22:38.361940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57152 ] 00:05:26.649 [2024-09-30 12:22:38.525567] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.907 [2024-09-30 12:22:38.714885] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.843 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.843 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:27.843 12:22:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:27.843 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.843 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.843 [2024-09-30 12:22:39.508127] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:27.843 request: 00:05:27.843 { 00:05:27.843 "trtype": "tcp", 00:05:27.843 "method": "nvmf_get_transports", 00:05:27.843 "req_id": 1 00:05:27.843 } 00:05:27.843 Got JSON-RPC error response 00:05:27.843 response: 00:05:27.844 { 00:05:27.844 "code": -19, 00:05:27.844 "message": "No such device" 00:05:27.844 } 00:05:27.844 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:27.844 12:22:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:27.844 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.844 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.844 [2024-09-30 12:22:39.520202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:27.844 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.844 12:22:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:27.844 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.844 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.844 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.844 12:22:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:27.844 { 00:05:27.844 "subsystems": [ 00:05:27.844 { 00:05:27.844 "subsystem": "fsdev", 00:05:27.844 "config": [ 00:05:27.844 { 00:05:27.844 "method": "fsdev_set_opts", 00:05:27.844 "params": { 00:05:27.844 "fsdev_io_pool_size": 65535, 00:05:27.844 "fsdev_io_cache_size": 256 00:05:27.844 } 00:05:27.844 } 00:05:27.844 ] 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "subsystem": "keyring", 00:05:27.844 "config": [] 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "subsystem": "iobuf", 00:05:27.844 "config": [ 00:05:27.844 { 00:05:27.844 "method": "iobuf_set_options", 00:05:27.844 "params": { 00:05:27.844 "small_pool_count": 8192, 00:05:27.844 "large_pool_count": 1024, 00:05:27.844 "small_bufsize": 8192, 00:05:27.844 "large_bufsize": 135168 00:05:27.844 } 00:05:27.844 } 00:05:27.844 ] 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "subsystem": "sock", 00:05:27.844 "config": [ 00:05:27.844 { 00:05:27.844 "method": "sock_set_default_impl", 00:05:27.844 "params": { 00:05:27.844 "impl_name": "posix" 00:05:27.844 } 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "method": "sock_impl_set_options", 00:05:27.844 "params": { 00:05:27.844 "impl_name": "ssl", 00:05:27.844 "recv_buf_size": 4096, 00:05:27.844 "send_buf_size": 4096, 00:05:27.844 "enable_recv_pipe": true, 00:05:27.844 "enable_quickack": false, 00:05:27.844 "enable_placement_id": 0, 00:05:27.844 
"enable_zerocopy_send_server": true, 00:05:27.844 "enable_zerocopy_send_client": false, 00:05:27.844 "zerocopy_threshold": 0, 00:05:27.844 "tls_version": 0, 00:05:27.844 "enable_ktls": false 00:05:27.844 } 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "method": "sock_impl_set_options", 00:05:27.844 "params": { 00:05:27.844 "impl_name": "posix", 00:05:27.844 "recv_buf_size": 2097152, 00:05:27.844 "send_buf_size": 2097152, 00:05:27.844 "enable_recv_pipe": true, 00:05:27.844 "enable_quickack": false, 00:05:27.844 "enable_placement_id": 0, 00:05:27.844 "enable_zerocopy_send_server": true, 00:05:27.844 "enable_zerocopy_send_client": false, 00:05:27.844 "zerocopy_threshold": 0, 00:05:27.844 "tls_version": 0, 00:05:27.844 "enable_ktls": false 00:05:27.844 } 00:05:27.844 } 00:05:27.844 ] 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "subsystem": "vmd", 00:05:27.844 "config": [] 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "subsystem": "accel", 00:05:27.844 "config": [ 00:05:27.844 { 00:05:27.844 "method": "accel_set_options", 00:05:27.844 "params": { 00:05:27.844 "small_cache_size": 128, 00:05:27.844 "large_cache_size": 16, 00:05:27.844 "task_count": 2048, 00:05:27.844 "sequence_count": 2048, 00:05:27.844 "buf_count": 2048 00:05:27.844 } 00:05:27.844 } 00:05:27.844 ] 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "subsystem": "bdev", 00:05:27.844 "config": [ 00:05:27.844 { 00:05:27.844 "method": "bdev_set_options", 00:05:27.844 "params": { 00:05:27.844 "bdev_io_pool_size": 65535, 00:05:27.844 "bdev_io_cache_size": 256, 00:05:27.844 "bdev_auto_examine": true, 00:05:27.844 "iobuf_small_cache_size": 128, 00:05:27.844 "iobuf_large_cache_size": 16 00:05:27.844 } 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "method": "bdev_raid_set_options", 00:05:27.844 "params": { 00:05:27.844 "process_window_size_kb": 1024, 00:05:27.844 "process_max_bandwidth_mb_sec": 0 00:05:27.844 } 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "method": "bdev_iscsi_set_options", 00:05:27.844 "params": { 00:05:27.844 
"timeout_sec": 30 00:05:27.844 } 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "method": "bdev_nvme_set_options", 00:05:27.844 "params": { 00:05:27.844 "action_on_timeout": "none", 00:05:27.844 "timeout_us": 0, 00:05:27.844 "timeout_admin_us": 0, 00:05:27.844 "keep_alive_timeout_ms": 10000, 00:05:27.844 "arbitration_burst": 0, 00:05:27.844 "low_priority_weight": 0, 00:05:27.844 "medium_priority_weight": 0, 00:05:27.844 "high_priority_weight": 0, 00:05:27.844 "nvme_adminq_poll_period_us": 10000, 00:05:27.844 "nvme_ioq_poll_period_us": 0, 00:05:27.844 "io_queue_requests": 0, 00:05:27.844 "delay_cmd_submit": true, 00:05:27.844 "transport_retry_count": 4, 00:05:27.844 "bdev_retry_count": 3, 00:05:27.844 "transport_ack_timeout": 0, 00:05:27.844 "ctrlr_loss_timeout_sec": 0, 00:05:27.844 "reconnect_delay_sec": 0, 00:05:27.844 "fast_io_fail_timeout_sec": 0, 00:05:27.844 "disable_auto_failback": false, 00:05:27.844 "generate_uuids": false, 00:05:27.844 "transport_tos": 0, 00:05:27.844 "nvme_error_stat": false, 00:05:27.844 "rdma_srq_size": 0, 00:05:27.844 "io_path_stat": false, 00:05:27.844 "allow_accel_sequence": false, 00:05:27.844 "rdma_max_cq_size": 0, 00:05:27.844 "rdma_cm_event_timeout_ms": 0, 00:05:27.844 "dhchap_digests": [ 00:05:27.844 "sha256", 00:05:27.844 "sha384", 00:05:27.844 "sha512" 00:05:27.844 ], 00:05:27.844 "dhchap_dhgroups": [ 00:05:27.844 "null", 00:05:27.844 "ffdhe2048", 00:05:27.844 "ffdhe3072", 00:05:27.844 "ffdhe4096", 00:05:27.844 "ffdhe6144", 00:05:27.844 "ffdhe8192" 00:05:27.844 ] 00:05:27.844 } 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "method": "bdev_nvme_set_hotplug", 00:05:27.844 "params": { 00:05:27.844 "period_us": 100000, 00:05:27.844 "enable": false 00:05:27.844 } 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "method": "bdev_wait_for_examine" 00:05:27.844 } 00:05:27.844 ] 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "subsystem": "scsi", 00:05:27.844 "config": null 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "subsystem": "scheduler", 
00:05:27.844 "config": [ 00:05:27.844 { 00:05:27.844 "method": "framework_set_scheduler", 00:05:27.844 "params": { 00:05:27.844 "name": "static" 00:05:27.844 } 00:05:27.844 } 00:05:27.844 ] 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "subsystem": "vhost_scsi", 00:05:27.844 "config": [] 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "subsystem": "vhost_blk", 00:05:27.844 "config": [] 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "subsystem": "ublk", 00:05:27.844 "config": [] 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "subsystem": "nbd", 00:05:27.844 "config": [] 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "subsystem": "nvmf", 00:05:27.844 "config": [ 00:05:27.844 { 00:05:27.844 "method": "nvmf_set_config", 00:05:27.844 "params": { 00:05:27.844 "discovery_filter": "match_any", 00:05:27.844 "admin_cmd_passthru": { 00:05:27.844 "identify_ctrlr": false 00:05:27.844 }, 00:05:27.844 "dhchap_digests": [ 00:05:27.844 "sha256", 00:05:27.844 "sha384", 00:05:27.844 "sha512" 00:05:27.844 ], 00:05:27.844 "dhchap_dhgroups": [ 00:05:27.844 "null", 00:05:27.844 "ffdhe2048", 00:05:27.844 "ffdhe3072", 00:05:27.844 "ffdhe4096", 00:05:27.844 "ffdhe6144", 00:05:27.844 "ffdhe8192" 00:05:27.844 ] 00:05:27.844 } 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "method": "nvmf_set_max_subsystems", 00:05:27.844 "params": { 00:05:27.844 "max_subsystems": 1024 00:05:27.844 } 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "method": "nvmf_set_crdt", 00:05:27.844 "params": { 00:05:27.844 "crdt1": 0, 00:05:27.844 "crdt2": 0, 00:05:27.844 "crdt3": 0 00:05:27.844 } 00:05:27.844 }, 00:05:27.844 { 00:05:27.844 "method": "nvmf_create_transport", 00:05:27.844 "params": { 00:05:27.844 "trtype": "TCP", 00:05:27.844 "max_queue_depth": 128, 00:05:27.844 "max_io_qpairs_per_ctrlr": 127, 00:05:27.844 "in_capsule_data_size": 4096, 00:05:27.844 "max_io_size": 131072, 00:05:27.844 "io_unit_size": 131072, 00:05:27.844 "max_aq_depth": 128, 00:05:27.844 "num_shared_buffers": 511, 00:05:27.844 "buf_cache_size": 4294967295, 
00:05:27.844 "dif_insert_or_strip": false, 00:05:27.844 "zcopy": false, 00:05:27.844 "c2h_success": true, 00:05:27.844 "sock_priority": 0, 00:05:27.844 "abort_timeout_sec": 1, 00:05:27.844 "ack_timeout": 0, 00:05:27.845 "data_wr_pool_size": 0 00:05:27.845 } 00:05:27.845 } 00:05:27.845 ] 00:05:27.845 }, 00:05:27.845 { 00:05:27.845 "subsystem": "iscsi", 00:05:27.845 "config": [ 00:05:27.845 { 00:05:27.845 "method": "iscsi_set_options", 00:05:27.845 "params": { 00:05:27.845 "node_base": "iqn.2016-06.io.spdk", 00:05:27.845 "max_sessions": 128, 00:05:27.845 "max_connections_per_session": 2, 00:05:27.845 "max_queue_depth": 64, 00:05:27.845 "default_time2wait": 2, 00:05:27.845 "default_time2retain": 20, 00:05:27.845 "first_burst_length": 8192, 00:05:27.845 "immediate_data": true, 00:05:27.845 "allow_duplicated_isid": false, 00:05:27.845 "error_recovery_level": 0, 00:05:27.845 "nop_timeout": 60, 00:05:27.845 "nop_in_interval": 30, 00:05:27.845 "disable_chap": false, 00:05:27.845 "require_chap": false, 00:05:27.845 "mutual_chap": false, 00:05:27.845 "chap_group": 0, 00:05:27.845 "max_large_datain_per_connection": 64, 00:05:27.845 "max_r2t_per_connection": 4, 00:05:27.845 "pdu_pool_size": 36864, 00:05:27.845 "immediate_data_pool_size": 16384, 00:05:27.845 "data_out_pool_size": 2048 00:05:27.845 } 00:05:27.845 } 00:05:27.845 ] 00:05:27.845 } 00:05:27.845 ] 00:05:27.845 } 00:05:27.845 12:22:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:27.845 12:22:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57152 00:05:27.845 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57152 ']' 00:05:27.845 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57152 00:05:27.845 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:27.845 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:05:27.845 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57152 00:05:28.104 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:28.104 killing process with pid 57152 00:05:28.104 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:28.104 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57152' 00:05:28.104 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57152 00:05:28.104 12:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57152 00:05:30.639 12:22:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57208 00:05:30.639 12:22:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:30.639 12:22:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:35.917 12:22:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57208 00:05:35.917 12:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57208 ']' 00:05:35.917 12:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57208 00:05:35.917 12:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:35.917 12:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:35.917 12:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57208 00:05:35.917 12:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:35.917 12:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:35.917 killing process with pid 57208 
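The `killprocess` trace above probes the PID with `kill -0` and checks the process name (`reactor_0`) before actually signalling it. A minimal sketch of that liveness probe in Python — a hypothetical helper for illustration, not the harness's actual `autotest_common.sh` implementation:

```python
import os

def process_alive(pid: int) -> bool:
    """Return True if a process with this PID exists, like `kill -0 $pid`."""
    try:
        os.kill(pid, 0)  # signal 0: existence/permission check, sends nothing
    except ProcessLookupError:
        return False     # no such process
    except PermissionError:
        return True      # exists, but owned by another user
    return True
```

The shell helper layers a `ps --no-headers -o comm=` name check on top of this so it never kills an unrelated process that reused the PID.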
00:05:35.917 12:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57208' 00:05:35.917 12:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57208 00:05:35.917 12:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57208 00:05:37.862 12:22:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:37.862 12:22:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:37.862 00:05:37.862 real 0m11.292s 00:05:37.862 user 0m10.733s 00:05:37.862 sys 0m0.826s 00:05:37.862 12:22:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.862 12:22:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.862 ************************************ 00:05:37.862 END TEST skip_rpc_with_json 00:05:37.862 ************************************ 00:05:37.862 12:22:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:37.862 12:22:49 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.862 12:22:49 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.862 12:22:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.862 ************************************ 00:05:37.862 START TEST skip_rpc_with_delay 00:05:37.862 ************************************ 00:05:37.862 12:22:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:37.862 12:22:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:37.862 12:22:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:37.862 12:22:49 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:37.862 12:22:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:37.862 12:22:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.862 12:22:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:37.862 12:22:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.862 12:22:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:37.862 12:22:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.863 12:22:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:37.863 12:22:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:37.863 12:22:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:37.863 [2024-09-30 12:22:49.725392] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
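The `NOT` wrapper above asserts the negative case: `spdk_tgt --no-rpc-server --wait-for-rpc` must refuse to start, producing exactly the "Cannot use '--wait-for-rpc' if no RPC server is going to be started" error seen in the log. The flag-compatibility check can be sketched generically (hypothetical validator, not SPDK's actual argument parsing):

```python
def validate_flags(no_rpc_server: bool, wait_for_rpc: bool) -> None:
    """Reject the contradictory combination the test deliberately provokes."""
    if no_rpc_server and wait_for_rpc:
        raise ValueError(
            "Cannot use '--wait-for-rpc' if no RPC server is going to be started."
        )
```

The harness counts the non-zero exit (`es=1`) as the test passing, since the failure is the expected behavior.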
00:05:37.863 [2024-09-30 12:22:49.725532] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:38.122 12:22:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:38.122 12:22:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:38.122 12:22:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:38.122 12:22:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:38.122 00:05:38.122 real 0m0.170s 00:05:38.122 user 0m0.094s 00:05:38.122 sys 0m0.075s 00:05:38.122 12:22:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.122 12:22:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:38.122 ************************************ 00:05:38.122 END TEST skip_rpc_with_delay 00:05:38.122 ************************************ 00:05:38.122 12:22:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:38.122 12:22:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:38.122 12:22:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:38.122 12:22:49 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.122 12:22:49 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.122 12:22:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.122 ************************************ 00:05:38.122 START TEST exit_on_failed_rpc_init 00:05:38.122 ************************************ 00:05:38.122 12:22:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:38.122 12:22:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57336 00:05:38.122 12:22:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
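The next test's `waitforlisten` step polls until the freshly started target accepts connections on its Unix-domain RPC socket (`/var/tmp/spdk.sock`). A minimal version of that poll — a sketch of the pattern, not the real shell helper in `autotest_common.sh`:

```python
import socket
import time

def wait_for_listen(path: str, timeout: float = 5.0) -> bool:
    """Poll a Unix-domain socket path until a server accepts connections."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            try:
                s.connect(path)
                return True  # something is listening
            except (FileNotFoundError, ConnectionRefusedError):
                time.sleep(0.1)  # target not up yet; retry
    return False
```

The shell version additionally bounds the wait with `max_retries` and bails out if the target PID dies before the socket appears.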
00:05:38.122 12:22:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57336 00:05:38.122 12:22:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57336 ']' 00:05:38.122 12:22:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.122 12:22:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.122 12:22:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.122 12:22:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.122 12:22:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:38.122 [2024-09-30 12:22:49.961956] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:38.123 [2024-09-30 12:22:49.962110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57336 ] 00:05:38.382 [2024-09-30 12:22:50.122845] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.642 [2024-09-30 12:22:50.329188] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.579 12:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.579 12:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:39.579 12:22:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.579 12:22:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.579 12:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:39.579 12:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.580 12:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.580 12:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.580 12:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.580 12:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.580 12:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.580 12:22:51 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.580 12:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.580 12:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:39.580 12:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.580 [2024-09-30 12:22:51.274002] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:39.580 [2024-09-30 12:22:51.274132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57354 ] 00:05:39.580 [2024-09-30 12:22:51.435220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.839 [2024-09-30 12:22:51.636958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.839 [2024-09-30 12:22:51.637071] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
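The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above is the classic address-in-use failure: the second target tries to bind the same Unix-domain path while the first still holds it, which is exactly what this test provokes by launching two targets with the default socket. A small self-contained demonstration of that failure mode (a sketch under that assumption, using plain sockets rather than SPDK):

```python
import errno
import os
import socket
import tempfile

def bind_rpc_socket(path: str) -> socket.socket:
    """Bind and listen on a Unix-domain socket path, as an RPC server would."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.bind(path)  # raises OSError(EADDRINUSE) if another listener owns the path
    s.listen(1)
    return s

path = os.path.join(tempfile.mkdtemp(), "spdk.sock")
first = bind_rpc_socket(path)   # first target: succeeds
try:
    bind_rpc_socket(path)       # second target: address already in use
except OSError as e:
    assert e.errno == errno.EADDRINUSE
first.close()
```

SPDK turns this bind failure into a non-zero `spdk_app_start` return, which is why the test then checks that the first process is torn down cleanly via the EXIT trap.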
00:05:39.839 [2024-09-30 12:22:51.637084] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:39.839 [2024-09-30 12:22:51.637095] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.407 12:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:40.407 12:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:40.407 12:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:40.407 12:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:40.407 12:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:40.407 12:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:40.407 12:22:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:40.407 12:22:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57336 00:05:40.407 12:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57336 ']' 00:05:40.407 12:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57336 00:05:40.407 12:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:40.407 12:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.407 12:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57336 00:05:40.407 12:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.407 12:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.407 killing process with pid 57336 00:05:40.407 12:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 57336' 00:05:40.407 12:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57336 00:05:40.407 12:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57336 00:05:42.942 00:05:42.942 real 0m4.581s 00:05:42.942 user 0m5.092s 00:05:42.942 sys 0m0.567s 00:05:42.942 12:22:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.942 12:22:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:42.942 ************************************ 00:05:42.942 END TEST exit_on_failed_rpc_init 00:05:42.942 ************************************ 00:05:42.942 12:22:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:42.942 00:05:42.942 real 0m24.024s 00:05:42.942 user 0m23.113s 00:05:42.942 sys 0m2.183s 00:05:42.942 12:22:54 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.942 12:22:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.942 ************************************ 00:05:42.942 END TEST skip_rpc 00:05:42.942 ************************************ 00:05:42.942 12:22:54 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:42.942 12:22:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.942 12:22:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.942 12:22:54 -- common/autotest_common.sh@10 -- # set +x 00:05:42.942 ************************************ 00:05:42.942 START TEST rpc_client 00:05:42.942 ************************************ 00:05:42.942 12:22:54 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:42.942 * Looking for test storage... 
00:05:42.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:42.942 12:22:54 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:42.942 12:22:54 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:42.942 12:22:54 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:42.942 12:22:54 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.942 12:22:54 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:42.942 12:22:54 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.942 12:22:54 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:42.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.942 --rc genhtml_branch_coverage=1 00:05:42.942 --rc genhtml_function_coverage=1 00:05:42.942 --rc genhtml_legend=1 00:05:42.942 --rc geninfo_all_blocks=1 00:05:42.942 --rc geninfo_unexecuted_blocks=1 00:05:42.942 00:05:42.942 ' 00:05:42.942 12:22:54 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:42.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.942 --rc genhtml_branch_coverage=1 00:05:42.942 --rc genhtml_function_coverage=1 00:05:42.942 --rc genhtml_legend=1 00:05:42.942 --rc geninfo_all_blocks=1 00:05:42.942 --rc geninfo_unexecuted_blocks=1 00:05:42.943 00:05:42.943 ' 00:05:42.943 12:22:54 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:42.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.943 --rc genhtml_branch_coverage=1 00:05:42.943 --rc genhtml_function_coverage=1 00:05:42.943 --rc genhtml_legend=1 00:05:42.943 --rc geninfo_all_blocks=1 00:05:42.943 --rc geninfo_unexecuted_blocks=1 00:05:42.943 00:05:42.943 ' 00:05:42.943 12:22:54 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:42.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.943 --rc genhtml_branch_coverage=1 00:05:42.943 --rc genhtml_function_coverage=1 00:05:42.943 --rc genhtml_legend=1 00:05:42.943 --rc geninfo_all_blocks=1 00:05:42.943 --rc geninfo_unexecuted_blocks=1 00:05:42.943 00:05:42.943 ' 00:05:42.943 12:22:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:42.943 OK 00:05:43.203 12:22:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:43.203 00:05:43.203 real 0m0.279s 00:05:43.203 user 0m0.152s 00:05:43.203 sys 0m0.143s 00:05:43.203 12:22:54 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.203 12:22:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:43.203 ************************************ 00:05:43.203 END TEST rpc_client 00:05:43.203 ************************************ 00:05:43.203 12:22:54 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:43.203 12:22:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.203 12:22:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.203 12:22:54 -- common/autotest_common.sh@10 -- # set +x 00:05:43.203 ************************************ 00:05:43.203 START TEST json_config 00:05:43.203 ************************************ 00:05:43.203 12:22:54 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:43.203 12:22:55 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:43.203 12:22:55 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:43.203 12:22:55 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:43.203 12:22:55 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:43.203 12:22:55 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.203 12:22:55 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.203 12:22:55 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.203 12:22:55 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.203 12:22:55 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.203 12:22:55 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.203 12:22:55 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.203 12:22:55 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.203 12:22:55 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.203 12:22:55 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.203 12:22:55 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.203 12:22:55 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:43.203 12:22:55 json_config -- scripts/common.sh@345 -- # : 1 00:05:43.203 12:22:55 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.203 12:22:55 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.203 12:22:55 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:43.203 12:22:55 json_config -- scripts/common.sh@353 -- # local d=1 00:05:43.203 12:22:55 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.203 12:22:55 json_config -- scripts/common.sh@355 -- # echo 1 00:05:43.203 12:22:55 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.203 12:22:55 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:43.203 12:22:55 json_config -- scripts/common.sh@353 -- # local d=2 00:05:43.203 12:22:55 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.203 12:22:55 json_config -- scripts/common.sh@355 -- # echo 2 00:05:43.203 12:22:55 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.203 12:22:55 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.203 12:22:55 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.203 12:22:55 json_config -- scripts/common.sh@368 -- # return 0 00:05:43.203 12:22:55 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.203 12:22:55 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:43.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.203 --rc genhtml_branch_coverage=1 00:05:43.203 --rc genhtml_function_coverage=1 00:05:43.203 --rc genhtml_legend=1 00:05:43.203 --rc geninfo_all_blocks=1 00:05:43.203 --rc geninfo_unexecuted_blocks=1 00:05:43.203 00:05:43.203 ' 00:05:43.203 12:22:55 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:43.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.203 --rc genhtml_branch_coverage=1 00:05:43.203 --rc genhtml_function_coverage=1 00:05:43.203 --rc genhtml_legend=1 00:05:43.203 --rc geninfo_all_blocks=1 00:05:43.203 --rc geninfo_unexecuted_blocks=1 00:05:43.203 00:05:43.203 ' 00:05:43.203 12:22:55 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:43.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.203 --rc genhtml_branch_coverage=1 00:05:43.203 --rc genhtml_function_coverage=1 00:05:43.203 --rc genhtml_legend=1 00:05:43.203 --rc geninfo_all_blocks=1 00:05:43.203 --rc geninfo_unexecuted_blocks=1 00:05:43.203 00:05:43.203 ' 00:05:43.203 12:22:55 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:43.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.203 --rc genhtml_branch_coverage=1 00:05:43.203 --rc genhtml_function_coverage=1 00:05:43.203 --rc genhtml_legend=1 00:05:43.203 --rc geninfo_all_blocks=1 00:05:43.203 --rc geninfo_unexecuted_blocks=1 00:05:43.203 00:05:43.203 ' 00:05:43.203 12:22:55 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:43.203 12:22:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:43.462 12:22:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.462 12:22:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.462 12:22:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.462 12:22:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.462 12:22:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.462 12:22:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.462 12:22:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.462 12:22:55 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.462 12:22:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.462 12:22:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.463 12:22:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:47745ecc-8228-4d31-b22b-27f2eabba6fc 00:05:43.463 12:22:55 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=47745ecc-8228-4d31-b22b-27f2eabba6fc 00:05:43.463 12:22:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.463 12:22:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.463 12:22:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.463 12:22:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.463 12:22:55 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:43.463 12:22:55 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:43.463 12:22:55 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.463 12:22:55 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.463 12:22:55 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.463 12:22:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.463 12:22:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.463 12:22:55 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.463 12:22:55 json_config -- paths/export.sh@5 -- # export PATH 00:05:43.463 12:22:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.463 12:22:55 json_config -- nvmf/common.sh@51 -- # : 0 00:05:43.463 12:22:55 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:43.463 12:22:55 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:43.463 12:22:55 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.463 12:22:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.463 12:22:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.463 12:22:55 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:43.463 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:43.463 12:22:55 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:43.463 12:22:55 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:43.463 12:22:55 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:43.463 12:22:55 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:43.463 12:22:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:43.463 12:22:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:43.463 12:22:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:43.463 12:22:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:43.463 WARNING: No tests are enabled so not running JSON configuration tests 00:05:43.463 12:22:55 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:43.463 12:22:55 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:43.463 00:05:43.463 real 0m0.213s 00:05:43.463 user 0m0.129s 00:05:43.463 sys 0m0.092s 00:05:43.463 12:22:55 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.463 12:22:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.463 ************************************ 00:05:43.463 END TEST json_config 00:05:43.463 ************************************ 00:05:43.463 12:22:55 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:43.463 12:22:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.463 12:22:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.463 12:22:55 -- common/autotest_common.sh@10 -- # set +x 00:05:43.463 ************************************ 00:05:43.463 START TEST json_config_extra_key 00:05:43.463 ************************************ 00:05:43.463 12:22:55 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:43.463 12:22:55 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:43.463 12:22:55 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:43.463 12:22:55 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:43.463 12:22:55 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:43.463 12:22:55 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.463 12:22:55 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:43.723 12:22:55 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.723 12:22:55 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:43.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.723 --rc genhtml_branch_coverage=1 00:05:43.723 --rc genhtml_function_coverage=1 00:05:43.723 --rc genhtml_legend=1 00:05:43.723 --rc geninfo_all_blocks=1 00:05:43.723 --rc geninfo_unexecuted_blocks=1 00:05:43.723 00:05:43.723 ' 00:05:43.723 12:22:55 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:43.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.723 --rc genhtml_branch_coverage=1 00:05:43.723 --rc genhtml_function_coverage=1 00:05:43.723 --rc 
genhtml_legend=1 00:05:43.723 --rc geninfo_all_blocks=1 00:05:43.723 --rc geninfo_unexecuted_blocks=1 00:05:43.723 00:05:43.723 ' 00:05:43.723 12:22:55 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:43.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.723 --rc genhtml_branch_coverage=1 00:05:43.723 --rc genhtml_function_coverage=1 00:05:43.723 --rc genhtml_legend=1 00:05:43.723 --rc geninfo_all_blocks=1 00:05:43.723 --rc geninfo_unexecuted_blocks=1 00:05:43.723 00:05:43.723 ' 00:05:43.723 12:22:55 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:43.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.723 --rc genhtml_branch_coverage=1 00:05:43.723 --rc genhtml_function_coverage=1 00:05:43.723 --rc genhtml_legend=1 00:05:43.723 --rc geninfo_all_blocks=1 00:05:43.723 --rc geninfo_unexecuted_blocks=1 00:05:43.723 00:05:43.723 ' 00:05:43.723 12:22:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:47745ecc-8228-4d31-b22b-27f2eabba6fc 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=47745ecc-8228-4d31-b22b-27f2eabba6fc 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.723 12:22:55 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.723 12:22:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.723 12:22:55 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.723 12:22:55 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.723 12:22:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:43.723 12:22:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:43.723 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:43.723 12:22:55 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:43.723 12:22:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:43.723 12:22:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:43.723 12:22:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:43.723 12:22:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:43.723 12:22:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:43.723 12:22:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:43.723 12:22:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:43.723 12:22:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:43.723 12:22:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:43.723 12:22:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:43.723 INFO: launching applications... 00:05:43.723 12:22:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:43.723 12:22:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:43.724 12:22:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:43.724 12:22:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:43.724 12:22:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:43.724 12:22:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:43.724 12:22:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:43.724 12:22:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.724 12:22:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.724 12:22:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57569 00:05:43.724 Waiting for target to run... 00:05:43.724 12:22:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:43.724 12:22:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57569 /var/tmp/spdk_tgt.sock 00:05:43.724 12:22:55 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57569 ']' 00:05:43.724 12:22:55 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.724 12:22:55 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:43.724 12:22:55 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:43.724 12:22:55 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.724 12:22:55 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.724 12:22:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:43.724 [2024-09-30 12:22:55.512093] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:43.724 [2024-09-30 12:22:55.512224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57569 ] 00:05:44.292 [2024-09-30 12:22:55.883652] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.292 [2024-09-30 12:22:56.059931] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.863 12:22:56 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.863 12:22:56 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:44.863 00:05:44.863 12:22:56 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:44.863 INFO: shutting down applications... 00:05:44.863 12:22:56 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:44.863 12:22:56 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:44.863 12:22:56 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:44.863 12:22:56 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:44.863 12:22:56 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57569 ]] 00:05:44.863 12:22:56 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57569 00:05:44.863 12:22:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:44.863 12:22:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.863 12:22:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57569 00:05:44.863 12:22:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:45.439 12:22:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:45.439 12:22:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.439 12:22:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57569 00:05:45.439 12:22:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:46.009 12:22:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:46.009 12:22:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.009 12:22:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57569 00:05:46.009 12:22:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:46.578 12:22:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:46.578 12:22:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.578 12:22:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57569 00:05:46.578 12:22:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:47.146 12:22:58 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:47.146 12:22:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.146 12:22:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57569 00:05:47.146 12:22:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:47.405 12:22:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:47.405 12:22:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.405 12:22:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57569 00:05:47.405 12:22:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:47.973 12:22:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:47.973 12:22:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.973 12:22:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57569 00:05:47.973 12:22:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:47.973 12:22:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:47.973 12:22:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:47.973 SPDK target shutdown done 00:05:47.973 12:22:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:47.973 Success 00:05:47.973 12:22:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:47.973 00:05:47.973 real 0m4.586s 00:05:47.973 user 0m4.024s 00:05:47.973 sys 0m0.558s 00:05:47.973 12:22:59 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.973 12:22:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:47.973 ************************************ 00:05:47.973 END TEST json_config_extra_key 00:05:47.973 ************************************ 00:05:47.973 12:22:59 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:47.973 12:22:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.973 12:22:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.973 12:22:59 -- common/autotest_common.sh@10 -- # set +x 00:05:47.973 ************************************ 00:05:47.973 START TEST alias_rpc 00:05:47.973 ************************************ 00:05:47.973 12:22:59 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.232 * Looking for test storage... 00:05:48.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:48.232 12:22:59 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:48.232 12:22:59 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:48.232 12:22:59 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:48.232 12:23:00 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:48.232 12:23:00 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.232 12:23:00 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:48.232 12:23:00 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.232 12:23:00 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:48.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.232 --rc genhtml_branch_coverage=1 00:05:48.232 --rc genhtml_function_coverage=1 00:05:48.232 --rc genhtml_legend=1 00:05:48.232 --rc geninfo_all_blocks=1 00:05:48.232 --rc geninfo_unexecuted_blocks=1 00:05:48.232 00:05:48.232 ' 00:05:48.232 12:23:00 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:48.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.232 --rc genhtml_branch_coverage=1 00:05:48.232 --rc genhtml_function_coverage=1 00:05:48.232 --rc 
genhtml_legend=1 00:05:48.232 --rc geninfo_all_blocks=1 00:05:48.232 --rc geninfo_unexecuted_blocks=1 00:05:48.232 00:05:48.232 ' 00:05:48.232 12:23:00 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:48.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.232 --rc genhtml_branch_coverage=1 00:05:48.232 --rc genhtml_function_coverage=1 00:05:48.232 --rc genhtml_legend=1 00:05:48.232 --rc geninfo_all_blocks=1 00:05:48.232 --rc geninfo_unexecuted_blocks=1 00:05:48.232 00:05:48.232 ' 00:05:48.232 12:23:00 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:48.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.232 --rc genhtml_branch_coverage=1 00:05:48.232 --rc genhtml_function_coverage=1 00:05:48.232 --rc genhtml_legend=1 00:05:48.233 --rc geninfo_all_blocks=1 00:05:48.233 --rc geninfo_unexecuted_blocks=1 00:05:48.233 00:05:48.233 ' 00:05:48.233 12:23:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:48.233 12:23:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57681 00:05:48.233 12:23:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.233 12:23:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57681 00:05:48.233 12:23:00 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57681 ']' 00:05:48.233 12:23:00 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.233 12:23:00 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.233 12:23:00 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:48.233 12:23:00 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.233 12:23:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.491 [2024-09-30 12:23:00.166577] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:48.491 [2024-09-30 12:23:00.166704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57681 ] 00:05:48.491 [2024-09-30 12:23:00.326938] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.750 [2024-09-30 12:23:00.527323] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.687 12:23:01 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.687 12:23:01 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:49.687 12:23:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:49.946 12:23:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57681 00:05:49.946 12:23:01 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57681 ']' 00:05:49.946 12:23:01 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57681 00:05:49.946 12:23:01 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:49.946 12:23:01 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.946 12:23:01 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57681 00:05:49.946 12:23:01 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.946 12:23:01 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.947 12:23:01 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57681' 00:05:49.947 killing process with pid 57681 00:05:49.947 12:23:01 alias_rpc -- 
common/autotest_common.sh@969 -- # kill 57681 00:05:49.947 12:23:01 alias_rpc -- common/autotest_common.sh@974 -- # wait 57681 00:05:52.485 00:05:52.485 real 0m4.236s 00:05:52.485 user 0m4.199s 00:05:52.485 sys 0m0.564s 00:05:52.485 12:23:04 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.485 12:23:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.485 ************************************ 00:05:52.485 END TEST alias_rpc 00:05:52.485 ************************************ 00:05:52.485 12:23:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:52.485 12:23:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:52.485 12:23:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.485 12:23:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.485 12:23:04 -- common/autotest_common.sh@10 -- # set +x 00:05:52.485 ************************************ 00:05:52.485 START TEST spdkcli_tcp 00:05:52.485 ************************************ 00:05:52.485 12:23:04 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:52.485 * Looking for test storage... 
00:05:52.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:52.485 12:23:04 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:52.485 12:23:04 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:52.485 12:23:04 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:52.485 12:23:04 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:52.485 12:23:04 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.485 12:23:04 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.485 12:23:04 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.485 12:23:04 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.485 12:23:04 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.485 12:23:04 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.485 12:23:04 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.485 12:23:04 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.486 12:23:04 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:52.486 12:23:04 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.486 12:23:04 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:52.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.486 --rc genhtml_branch_coverage=1 00:05:52.486 --rc genhtml_function_coverage=1 00:05:52.486 --rc genhtml_legend=1 00:05:52.486 --rc geninfo_all_blocks=1 00:05:52.486 --rc geninfo_unexecuted_blocks=1 00:05:52.486 00:05:52.486 ' 00:05:52.486 12:23:04 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:52.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.486 --rc genhtml_branch_coverage=1 00:05:52.486 --rc genhtml_function_coverage=1 00:05:52.486 --rc genhtml_legend=1 00:05:52.486 --rc geninfo_all_blocks=1 00:05:52.486 --rc geninfo_unexecuted_blocks=1 00:05:52.486 00:05:52.486 ' 00:05:52.486 12:23:04 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:52.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.486 --rc genhtml_branch_coverage=1 00:05:52.486 --rc genhtml_function_coverage=1 00:05:52.486 --rc genhtml_legend=1 00:05:52.486 --rc geninfo_all_blocks=1 00:05:52.486 --rc geninfo_unexecuted_blocks=1 00:05:52.486 00:05:52.486 ' 00:05:52.486 12:23:04 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:52.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.486 --rc genhtml_branch_coverage=1 00:05:52.486 --rc genhtml_function_coverage=1 00:05:52.486 --rc genhtml_legend=1 00:05:52.486 --rc geninfo_all_blocks=1 00:05:52.486 --rc geninfo_unexecuted_blocks=1 00:05:52.486 00:05:52.486 ' 00:05:52.486 12:23:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:52.486 12:23:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:52.486 12:23:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:52.486 12:23:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:52.486 12:23:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:52.486 12:23:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:52.486 12:23:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:52.486 12:23:04 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.486 12:23:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:52.486 12:23:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57788 00:05:52.486 12:23:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:52.486 12:23:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57788 00:05:52.486 12:23:04 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 57788 ']' 00:05:52.486 12:23:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.486 12:23:04 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.486 12:23:04 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.486 12:23:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.486 12:23:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:52.744 [2024-09-30 12:23:04.465519] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:52.744 [2024-09-30 12:23:04.465633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57788 ] 00:05:52.744 [2024-09-30 12:23:04.630187] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.003 [2024-09-30 12:23:04.825375] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.003 [2024-09-30 12:23:04.825412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.940 12:23:05 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.940 12:23:05 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:53.940 12:23:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57805 00:05:53.940 12:23:05 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:53.940 12:23:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:53.940 [ 00:05:53.940 "bdev_malloc_delete", 
00:05:53.940 "bdev_malloc_create", 00:05:53.940 "bdev_null_resize", 00:05:53.940 "bdev_null_delete", 00:05:53.940 "bdev_null_create", 00:05:53.940 "bdev_nvme_cuse_unregister", 00:05:53.940 "bdev_nvme_cuse_register", 00:05:53.940 "bdev_opal_new_user", 00:05:53.940 "bdev_opal_set_lock_state", 00:05:53.940 "bdev_opal_delete", 00:05:53.940 "bdev_opal_get_info", 00:05:53.940 "bdev_opal_create", 00:05:53.940 "bdev_nvme_opal_revert", 00:05:53.940 "bdev_nvme_opal_init", 00:05:53.940 "bdev_nvme_send_cmd", 00:05:53.940 "bdev_nvme_set_keys", 00:05:53.940 "bdev_nvme_get_path_iostat", 00:05:53.940 "bdev_nvme_get_mdns_discovery_info", 00:05:53.940 "bdev_nvme_stop_mdns_discovery", 00:05:53.940 "bdev_nvme_start_mdns_discovery", 00:05:53.940 "bdev_nvme_set_multipath_policy", 00:05:53.940 "bdev_nvme_set_preferred_path", 00:05:53.940 "bdev_nvme_get_io_paths", 00:05:53.940 "bdev_nvme_remove_error_injection", 00:05:53.940 "bdev_nvme_add_error_injection", 00:05:53.940 "bdev_nvme_get_discovery_info", 00:05:53.940 "bdev_nvme_stop_discovery", 00:05:53.940 "bdev_nvme_start_discovery", 00:05:53.940 "bdev_nvme_get_controller_health_info", 00:05:53.940 "bdev_nvme_disable_controller", 00:05:53.940 "bdev_nvme_enable_controller", 00:05:53.940 "bdev_nvme_reset_controller", 00:05:53.940 "bdev_nvme_get_transport_statistics", 00:05:53.940 "bdev_nvme_apply_firmware", 00:05:53.940 "bdev_nvme_detach_controller", 00:05:53.940 "bdev_nvme_get_controllers", 00:05:53.940 "bdev_nvme_attach_controller", 00:05:53.940 "bdev_nvme_set_hotplug", 00:05:53.940 "bdev_nvme_set_options", 00:05:53.940 "bdev_passthru_delete", 00:05:53.940 "bdev_passthru_create", 00:05:53.940 "bdev_lvol_set_parent_bdev", 00:05:53.940 "bdev_lvol_set_parent", 00:05:53.940 "bdev_lvol_check_shallow_copy", 00:05:53.940 "bdev_lvol_start_shallow_copy", 00:05:53.940 "bdev_lvol_grow_lvstore", 00:05:53.940 "bdev_lvol_get_lvols", 00:05:53.940 "bdev_lvol_get_lvstores", 00:05:53.940 "bdev_lvol_delete", 00:05:53.940 "bdev_lvol_set_read_only", 
00:05:53.940 "bdev_lvol_resize", 00:05:53.940 "bdev_lvol_decouple_parent", 00:05:53.940 "bdev_lvol_inflate", 00:05:53.940 "bdev_lvol_rename", 00:05:53.940 "bdev_lvol_clone_bdev", 00:05:53.940 "bdev_lvol_clone", 00:05:53.940 "bdev_lvol_snapshot", 00:05:53.940 "bdev_lvol_create", 00:05:53.940 "bdev_lvol_delete_lvstore", 00:05:53.940 "bdev_lvol_rename_lvstore", 00:05:53.940 "bdev_lvol_create_lvstore", 00:05:53.940 "bdev_raid_set_options", 00:05:53.940 "bdev_raid_remove_base_bdev", 00:05:53.940 "bdev_raid_add_base_bdev", 00:05:53.940 "bdev_raid_delete", 00:05:53.940 "bdev_raid_create", 00:05:53.940 "bdev_raid_get_bdevs", 00:05:53.940 "bdev_error_inject_error", 00:05:53.940 "bdev_error_delete", 00:05:53.940 "bdev_error_create", 00:05:53.940 "bdev_split_delete", 00:05:53.940 "bdev_split_create", 00:05:53.940 "bdev_delay_delete", 00:05:53.940 "bdev_delay_create", 00:05:53.940 "bdev_delay_update_latency", 00:05:53.940 "bdev_zone_block_delete", 00:05:53.941 "bdev_zone_block_create", 00:05:53.941 "blobfs_create", 00:05:53.941 "blobfs_detect", 00:05:53.941 "blobfs_set_cache_size", 00:05:53.941 "bdev_aio_delete", 00:05:53.941 "bdev_aio_rescan", 00:05:53.941 "bdev_aio_create", 00:05:53.941 "bdev_ftl_set_property", 00:05:53.941 "bdev_ftl_get_properties", 00:05:53.941 "bdev_ftl_get_stats", 00:05:53.941 "bdev_ftl_unmap", 00:05:53.941 "bdev_ftl_unload", 00:05:53.941 "bdev_ftl_delete", 00:05:53.941 "bdev_ftl_load", 00:05:53.941 "bdev_ftl_create", 00:05:53.941 "bdev_virtio_attach_controller", 00:05:53.941 "bdev_virtio_scsi_get_devices", 00:05:53.941 "bdev_virtio_detach_controller", 00:05:53.941 "bdev_virtio_blk_set_hotplug", 00:05:53.941 "bdev_iscsi_delete", 00:05:53.941 "bdev_iscsi_create", 00:05:53.941 "bdev_iscsi_set_options", 00:05:53.941 "accel_error_inject_error", 00:05:53.941 "ioat_scan_accel_module", 00:05:53.941 "dsa_scan_accel_module", 00:05:53.941 "iaa_scan_accel_module", 00:05:53.941 "keyring_file_remove_key", 00:05:53.941 "keyring_file_add_key", 00:05:53.941 
"keyring_linux_set_options", 00:05:53.941 "fsdev_aio_delete", 00:05:53.941 "fsdev_aio_create", 00:05:53.941 "iscsi_get_histogram", 00:05:53.941 "iscsi_enable_histogram", 00:05:53.941 "iscsi_set_options", 00:05:53.941 "iscsi_get_auth_groups", 00:05:53.941 "iscsi_auth_group_remove_secret", 00:05:53.941 "iscsi_auth_group_add_secret", 00:05:53.941 "iscsi_delete_auth_group", 00:05:53.941 "iscsi_create_auth_group", 00:05:53.941 "iscsi_set_discovery_auth", 00:05:53.941 "iscsi_get_options", 00:05:53.941 "iscsi_target_node_request_logout", 00:05:53.941 "iscsi_target_node_set_redirect", 00:05:53.941 "iscsi_target_node_set_auth", 00:05:53.941 "iscsi_target_node_add_lun", 00:05:53.941 "iscsi_get_stats", 00:05:53.941 "iscsi_get_connections", 00:05:53.941 "iscsi_portal_group_set_auth", 00:05:53.941 "iscsi_start_portal_group", 00:05:53.941 "iscsi_delete_portal_group", 00:05:53.941 "iscsi_create_portal_group", 00:05:53.941 "iscsi_get_portal_groups", 00:05:53.941 "iscsi_delete_target_node", 00:05:53.941 "iscsi_target_node_remove_pg_ig_maps", 00:05:53.941 "iscsi_target_node_add_pg_ig_maps", 00:05:53.941 "iscsi_create_target_node", 00:05:53.941 "iscsi_get_target_nodes", 00:05:53.941 "iscsi_delete_initiator_group", 00:05:53.941 "iscsi_initiator_group_remove_initiators", 00:05:53.941 "iscsi_initiator_group_add_initiators", 00:05:53.941 "iscsi_create_initiator_group", 00:05:53.941 "iscsi_get_initiator_groups", 00:05:53.941 "nvmf_set_crdt", 00:05:53.941 "nvmf_set_config", 00:05:53.941 "nvmf_set_max_subsystems", 00:05:53.941 "nvmf_stop_mdns_prr", 00:05:53.941 "nvmf_publish_mdns_prr", 00:05:53.941 "nvmf_subsystem_get_listeners", 00:05:53.941 "nvmf_subsystem_get_qpairs", 00:05:53.941 "nvmf_subsystem_get_controllers", 00:05:53.941 "nvmf_get_stats", 00:05:53.941 "nvmf_get_transports", 00:05:53.941 "nvmf_create_transport", 00:05:53.941 "nvmf_get_targets", 00:05:53.941 "nvmf_delete_target", 00:05:53.941 "nvmf_create_target", 00:05:53.941 "nvmf_subsystem_allow_any_host", 00:05:53.941 
"nvmf_subsystem_set_keys", 00:05:53.941 "nvmf_subsystem_remove_host", 00:05:53.941 "nvmf_subsystem_add_host", 00:05:53.941 "nvmf_ns_remove_host", 00:05:53.941 "nvmf_ns_add_host", 00:05:53.941 "nvmf_subsystem_remove_ns", 00:05:53.941 "nvmf_subsystem_set_ns_ana_group", 00:05:53.941 "nvmf_subsystem_add_ns", 00:05:53.941 "nvmf_subsystem_listener_set_ana_state", 00:05:53.941 "nvmf_discovery_get_referrals", 00:05:53.941 "nvmf_discovery_remove_referral", 00:05:53.941 "nvmf_discovery_add_referral", 00:05:53.941 "nvmf_subsystem_remove_listener", 00:05:53.941 "nvmf_subsystem_add_listener", 00:05:53.941 "nvmf_delete_subsystem", 00:05:53.941 "nvmf_create_subsystem", 00:05:53.941 "nvmf_get_subsystems", 00:05:53.941 "env_dpdk_get_mem_stats", 00:05:53.941 "nbd_get_disks", 00:05:53.941 "nbd_stop_disk", 00:05:53.941 "nbd_start_disk", 00:05:53.941 "ublk_recover_disk", 00:05:53.941 "ublk_get_disks", 00:05:53.941 "ublk_stop_disk", 00:05:53.941 "ublk_start_disk", 00:05:53.941 "ublk_destroy_target", 00:05:53.941 "ublk_create_target", 00:05:53.941 "virtio_blk_create_transport", 00:05:53.941 "virtio_blk_get_transports", 00:05:53.941 "vhost_controller_set_coalescing", 00:05:53.941 "vhost_get_controllers", 00:05:53.941 "vhost_delete_controller", 00:05:53.941 "vhost_create_blk_controller", 00:05:53.941 "vhost_scsi_controller_remove_target", 00:05:53.941 "vhost_scsi_controller_add_target", 00:05:53.941 "vhost_start_scsi_controller", 00:05:53.941 "vhost_create_scsi_controller", 00:05:53.941 "thread_set_cpumask", 00:05:53.941 "scheduler_set_options", 00:05:53.941 "framework_get_governor", 00:05:53.941 "framework_get_scheduler", 00:05:53.941 "framework_set_scheduler", 00:05:53.941 "framework_get_reactors", 00:05:53.941 "thread_get_io_channels", 00:05:53.941 "thread_get_pollers", 00:05:53.941 "thread_get_stats", 00:05:53.941 "framework_monitor_context_switch", 00:05:53.941 "spdk_kill_instance", 00:05:53.941 "log_enable_timestamps", 00:05:53.941 "log_get_flags", 00:05:53.941 "log_clear_flag", 
00:05:53.941 "log_set_flag", 00:05:53.941 "log_get_level", 00:05:53.941 "log_set_level", 00:05:53.941 "log_get_print_level", 00:05:53.941 "log_set_print_level", 00:05:53.941 "framework_enable_cpumask_locks", 00:05:53.941 "framework_disable_cpumask_locks", 00:05:53.941 "framework_wait_init", 00:05:53.941 "framework_start_init", 00:05:53.941 "scsi_get_devices", 00:05:53.941 "bdev_get_histogram", 00:05:53.941 "bdev_enable_histogram", 00:05:53.941 "bdev_set_qos_limit", 00:05:53.941 "bdev_set_qd_sampling_period", 00:05:53.941 "bdev_get_bdevs", 00:05:53.941 "bdev_reset_iostat", 00:05:53.941 "bdev_get_iostat", 00:05:53.941 "bdev_examine", 00:05:53.941 "bdev_wait_for_examine", 00:05:53.941 "bdev_set_options", 00:05:53.941 "accel_get_stats", 00:05:53.941 "accel_set_options", 00:05:53.941 "accel_set_driver", 00:05:53.941 "accel_crypto_key_destroy", 00:05:53.941 "accel_crypto_keys_get", 00:05:53.941 "accel_crypto_key_create", 00:05:53.941 "accel_assign_opc", 00:05:53.941 "accel_get_module_info", 00:05:53.941 "accel_get_opc_assignments", 00:05:53.941 "vmd_rescan", 00:05:53.941 "vmd_remove_device", 00:05:53.941 "vmd_enable", 00:05:53.941 "sock_get_default_impl", 00:05:53.941 "sock_set_default_impl", 00:05:53.941 "sock_impl_set_options", 00:05:53.941 "sock_impl_get_options", 00:05:53.941 "iobuf_get_stats", 00:05:53.941 "iobuf_set_options", 00:05:53.941 "keyring_get_keys", 00:05:53.941 "framework_get_pci_devices", 00:05:53.941 "framework_get_config", 00:05:53.941 "framework_get_subsystems", 00:05:53.941 "fsdev_set_opts", 00:05:53.941 "fsdev_get_opts", 00:05:53.941 "trace_get_info", 00:05:53.941 "trace_get_tpoint_group_mask", 00:05:53.941 "trace_disable_tpoint_group", 00:05:53.941 "trace_enable_tpoint_group", 00:05:53.941 "trace_clear_tpoint_mask", 00:05:53.941 "trace_set_tpoint_mask", 00:05:53.941 "notify_get_notifications", 00:05:53.941 "notify_get_types", 00:05:53.941 "spdk_get_version", 00:05:53.941 "rpc_get_methods" 00:05:53.941 ] 00:05:54.200 12:23:05 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:54.200 12:23:05 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:54.200 12:23:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.200 12:23:05 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:54.200 12:23:05 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57788 00:05:54.200 12:23:05 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57788 ']' 00:05:54.200 12:23:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57788 00:05:54.200 12:23:05 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:54.200 12:23:05 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.200 12:23:05 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57788 00:05:54.200 12:23:05 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.200 12:23:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.200 killing process with pid 57788 00:05:54.200 12:23:05 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57788' 00:05:54.200 12:23:05 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57788 00:05:54.200 12:23:05 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57788 00:05:56.735 00:05:56.735 real 0m4.226s 00:05:56.735 user 0m7.299s 00:05:56.735 sys 0m0.618s 00:05:56.735 12:23:08 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.735 12:23:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:56.735 ************************************ 00:05:56.735 END TEST spdkcli_tcp 00:05:56.735 ************************************ 00:05:56.735 12:23:08 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:56.735 12:23:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.735 12:23:08 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.735 12:23:08 -- common/autotest_common.sh@10 -- # set +x 00:05:56.735 ************************************ 00:05:56.735 START TEST dpdk_mem_utility 00:05:56.735 ************************************ 00:05:56.735 12:23:08 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:56.735 * Looking for test storage... 00:05:56.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:56.735 12:23:08 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:56.735 12:23:08 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:56.735 12:23:08 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:56.735 12:23:08 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:56.735 12:23:08 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.735 12:23:08 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.735 12:23:08 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.735 12:23:08 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.735 12:23:08 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.735 12:23:08 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.735 12:23:08 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.735 12:23:08 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.735 12:23:08 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.994 12:23:08 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.994 12:23:08 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.994 12:23:08 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:56.994 12:23:08 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:56.994 
12:23:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.994 12:23:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.994 12:23:08 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:56.994 12:23:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:56.994 12:23:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.994 12:23:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:56.994 12:23:08 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.994 12:23:08 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:56.994 12:23:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:56.995 12:23:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.995 12:23:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:56.995 12:23:08 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.995 12:23:08 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.995 12:23:08 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.995 12:23:08 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:56.995 12:23:08 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.995 12:23:08 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:56.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.995 --rc genhtml_branch_coverage=1 00:05:56.995 --rc genhtml_function_coverage=1 00:05:56.995 --rc genhtml_legend=1 00:05:56.995 --rc geninfo_all_blocks=1 00:05:56.995 --rc geninfo_unexecuted_blocks=1 00:05:56.995 00:05:56.995 ' 00:05:56.995 12:23:08 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:56.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.995 --rc 
genhtml_branch_coverage=1 00:05:56.995 --rc genhtml_function_coverage=1 00:05:56.995 --rc genhtml_legend=1 00:05:56.995 --rc geninfo_all_blocks=1 00:05:56.995 --rc geninfo_unexecuted_blocks=1 00:05:56.995 00:05:56.995 ' 00:05:56.995 12:23:08 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:56.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.995 --rc genhtml_branch_coverage=1 00:05:56.995 --rc genhtml_function_coverage=1 00:05:56.995 --rc genhtml_legend=1 00:05:56.995 --rc geninfo_all_blocks=1 00:05:56.995 --rc geninfo_unexecuted_blocks=1 00:05:56.995 00:05:56.995 ' 00:05:56.995 12:23:08 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:56.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.995 --rc genhtml_branch_coverage=1 00:05:56.995 --rc genhtml_function_coverage=1 00:05:56.995 --rc genhtml_legend=1 00:05:56.995 --rc geninfo_all_blocks=1 00:05:56.995 --rc geninfo_unexecuted_blocks=1 00:05:56.995 00:05:56.995 ' 00:05:56.995 12:23:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:56.995 12:23:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57910 00:05:56.995 12:23:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.995 12:23:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57910 00:05:56.995 12:23:08 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 57910 ']' 00:05:56.995 12:23:08 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.995 12:23:08 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:56.995 12:23:08 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.995 12:23:08 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.995 12:23:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:56.995 [2024-09-30 12:23:08.746485] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:56.995 [2024-09-30 12:23:08.746606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57910 ] 00:05:57.254 [2024-09-30 12:23:08.907972] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.254 [2024-09-30 12:23:09.104280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.192 12:23:09 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.192 12:23:09 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:58.192 12:23:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:58.192 12:23:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:58.192 12:23:09 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.192 12:23:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:58.192 { 00:05:58.192 "filename": "/tmp/spdk_mem_dump.txt" 00:05:58.192 } 00:05:58.192 12:23:09 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.192 12:23:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:58.192 DPDK memory size 866.000000 MiB in 1 heap(s) 00:05:58.192 1 heaps 
totaling size 866.000000 MiB 00:05:58.193 size: 866.000000 MiB heap id: 0 00:05:58.193 end heaps---------- 00:05:58.193 9 mempools totaling size 642.649841 MiB 00:05:58.193 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:58.193 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:58.193 size: 92.545471 MiB name: bdev_io_57910 00:05:58.193 size: 51.011292 MiB name: evtpool_57910 00:05:58.193 size: 50.003479 MiB name: msgpool_57910 00:05:58.193 size: 36.509338 MiB name: fsdev_io_57910 00:05:58.193 size: 21.763794 MiB name: PDU_Pool 00:05:58.193 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:58.193 size: 0.026123 MiB name: Session_Pool 00:05:58.193 end mempools------- 00:05:58.193 6 memzones totaling size 4.142822 MiB 00:05:58.193 size: 1.000366 MiB name: RG_ring_0_57910 00:05:58.193 size: 1.000366 MiB name: RG_ring_1_57910 00:05:58.193 size: 1.000366 MiB name: RG_ring_4_57910 00:05:58.193 size: 1.000366 MiB name: RG_ring_5_57910 00:05:58.193 size: 0.125366 MiB name: RG_ring_2_57910 00:05:58.193 size: 0.015991 MiB name: RG_ring_3_57910 00:05:58.193 end memzones------- 00:05:58.193 12:23:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:58.193 heap id: 0 total size: 866.000000 MiB number of busy elements: 314 number of free elements: 19 00:05:58.193 list of free elements. 
size: 19.913818 MiB 00:05:58.193 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:58.193 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:58.193 element at address: 0x200009600000 with size: 1.995972 MiB 00:05:58.193 element at address: 0x20000d800000 with size: 1.995972 MiB 00:05:58.193 element at address: 0x200007000000 with size: 1.991028 MiB 00:05:58.193 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:05:58.193 element at address: 0x20001c300040 with size: 0.999939 MiB 00:05:58.193 element at address: 0x20001c400000 with size: 0.999084 MiB 00:05:58.193 element at address: 0x200035000000 with size: 0.994324 MiB 00:05:58.193 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:05:58.193 element at address: 0x20001c700040 with size: 0.936401 MiB 00:05:58.193 element at address: 0x200000200000 with size: 0.832153 MiB 00:05:58.193 element at address: 0x20001de00000 with size: 0.562195 MiB 00:05:58.193 element at address: 0x200003e00000 with size: 0.490662 MiB 00:05:58.193 element at address: 0x20001c000000 with size: 0.488220 MiB 00:05:58.193 element at address: 0x20001c800000 with size: 0.485413 MiB 00:05:58.193 element at address: 0x200015e00000 with size: 0.443237 MiB 00:05:58.193 element at address: 0x20002b200000 with size: 0.390442 MiB 00:05:58.193 element at address: 0x200003a00000 with size: 0.352844 MiB 00:05:58.193 list of standard malloc elements. 
size: 199.287476 MiB
00:05:58.193 element at address: 0x20000d9fef80 with size: 132.000183 MiB
00:05:58.193 element at address: 0x2000097fef80 with size: 64.000183 MiB
00:05:58.193 element at address: 0x20001bdfff80 with size: 1.000183 MiB
00:05:58.193 element at address: 0x20001c1fff80 with size: 1.000183 MiB
00:05:58.193 element at address: 0x20001c5fff80 with size: 1.000183 MiB
00:05:58.193 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:58.193 element at address: 0x20001c7eff40 with size: 0.062683 MiB
00:05:58.193 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:58.193 element at address: 0x20000d7ff040 with size: 0.000427 MiB
00:05:58.193 element at address: 0x20001c7efdc0 with size: 0.000366 MiB
00:05:58.193 element at address: 0x200015dff040 with size: 0.000305 MiB
00:05:58.193 [ ... several hundred further elements of 0.000244 MiB each elided, spanning addresses 0x2000002d5080 through 0x20002b26f480 ... ]
00:05:58.195 element at address:
0x20002b26f580 with size: 0.000244 MiB
00:05:58.195 element at address: 0x20002b26f680 with size: 0.000244 MiB
00:05:58.195 element at address: 0x20002b26f780 with size: 0.000244 MiB
00:05:58.195 element at address: 0x20002b26f880 with size: 0.000244 MiB
00:05:58.195 element at address: 0x20002b26f980 with size: 0.000244 MiB
00:05:58.195 element at address: 0x20002b26fa80 with size: 0.000244 MiB
00:05:58.195 element at address: 0x20002b26fb80 with size: 0.000244 MiB
00:05:58.195 element at address: 0x20002b26fc80 with size: 0.000244 MiB
00:05:58.195 element at address: 0x20002b26fd80 with size: 0.000244 MiB
00:05:58.195 element at address: 0x20002b26fe80 with size: 0.000244 MiB
00:05:58.195 list of memzone associated elements. size: 646.798706 MiB
00:05:58.195 element at address: 0x20001de954c0 with size: 211.416809 MiB
00:05:58.195 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:58.195 element at address: 0x20002b26ff80 with size: 157.562622 MiB
00:05:58.195 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:58.195 element at address: 0x200015ff4740 with size: 92.045105 MiB
00:05:58.195 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57910_0
00:05:58.195 element at address: 0x2000009ff340 with size: 48.003113 MiB
00:05:58.195 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57910_0
00:05:58.195 element at address: 0x200003fff340 with size: 48.003113 MiB
00:05:58.195 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57910_0
00:05:58.195 element at address: 0x2000071fdb40 with size: 36.008972 MiB
00:05:58.195 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57910_0
00:05:58.195 element at address: 0x20001c9be900 with size: 20.255615 MiB
00:05:58.195 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:58.195 element at address: 0x2000351feb00 with size: 18.005127 MiB
00:05:58.195 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:58.195 element at address: 0x2000005ffdc0 with size: 2.000549 MiB
00:05:58.195 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57910
00:05:58.195 element at address: 0x200003bffdc0 with size: 2.000549 MiB
00:05:58.195 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57910
00:05:58.195 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:58.195 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57910
00:05:58.195 element at address: 0x20001c0fde00 with size: 1.008179 MiB
00:05:58.195 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:58.195 element at address: 0x20001c8bc780 with size: 1.008179 MiB
00:05:58.195 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:58.195 element at address: 0x20001bcfde00 with size: 1.008179 MiB
00:05:58.195 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:58.195 element at address: 0x200015ef25c0 with size: 1.008179 MiB
00:05:58.195 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:58.195 element at address: 0x200003eff100 with size: 1.000549 MiB
00:05:58.195 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57910
00:05:58.195 element at address: 0x200003affb80 with size: 1.000549 MiB
00:05:58.195 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57910
00:05:58.195 element at address: 0x20001c4ffd40 with size: 1.000549 MiB
00:05:58.195 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57910
00:05:58.195 element at address: 0x2000350fe8c0 with size: 1.000549 MiB
00:05:58.195 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57910
00:05:58.195 element at address: 0x200003a7f4c0 with size: 0.500549 MiB
00:05:58.195 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57910
00:05:58.195 element at address: 0x200003e7edc0 with size: 0.500549 MiB
00:05:58.195 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57910
00:05:58.195 element at address: 0x20001c07dac0 with size: 0.500549 MiB
00:05:58.195 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:58.195 element at address: 0x200015e72280 with size: 0.500549 MiB
00:05:58.195 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:58.195 element at address: 0x20001c87c440 with size: 0.250549 MiB
00:05:58.195 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:58.195 element at address: 0x200003a5e780 with size: 0.125549 MiB
00:05:58.195 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57910
00:05:58.195 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB
00:05:58.195 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:58.195 element at address: 0x20002b264140 with size: 0.023804 MiB
00:05:58.195 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:58.196 element at address: 0x200003a5a540 with size: 0.016174 MiB
00:05:58.196 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57910
00:05:58.196 element at address: 0x20002b26a2c0 with size: 0.002502 MiB
00:05:58.196 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:58.196 element at address: 0x2000002d6180 with size: 0.000366 MiB
00:05:58.196 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57910
00:05:58.196 element at address: 0x200003aff800 with size: 0.000366 MiB
00:05:58.196 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57910
00:05:58.196 element at address: 0x200015dffd80 with size: 0.000366 MiB
00:05:58.196 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57910
00:05:58.196 element at address: 0x20002b26ae00 with size: 0.000366 MiB
00:05:58.196 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:58.196 12:23:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
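The element dump above is bulky but regular; a hypothetical helper (not part of the SPDK scripts) can condense such a dump into per-size element counts. The function name and the exact field positions of the "element at address: 0x... with size: N MiB" records are assumptions based on the format shown here.

```shell
# Hypothetical helper: condense a DPDK memory dump (as printed above)
# into per-size element counts. Assumes "... with size: N MiB" records.
summarize_dpdk_dump() {
    awk '
        /element at address/ {
            # scan each record for the "size:" token and tally the value after it
            for (i = 1; i < NF; i++)
                if ($i == "size:") count[$(i+1)]++
        }
        END {
            for (s in count) printf "%s MiB x %d\n", s, count[s]
        }' "$1"
}
```

Run against a saved log, this turns hundreds of identical 0.000244 MiB lines into a single count per size.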
00:05:58.196 12:23:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57910
00:05:58.196 12:23:10 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 57910 ']'
00:05:58.196 12:23:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 57910
00:05:58.196 12:23:10 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:05:58.196 12:23:10 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:58.196 12:23:10 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57910
00:05:58.196 12:23:10 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:58.196 12:23:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 57910
00:05:58.196 12:23:10 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57910'
00:05:58.196 12:23:10 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 57910
00:05:58.196 12:23:10 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 57910
00:06:00.746
00:06:00.746 real 0m4.010s
00:06:00.746 user 0m3.877s
00:06:00.746 sys 0m0.585s
00:06:00.746 12:23:12 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:00.746 12:23:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:00.746 ************************************
00:06:00.746 END TEST dpdk_mem_utility
00:06:00.746 ************************************
00:06:00.746 12:23:12 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:00.746 12:23:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:00.746 12:23:12 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:00.746 12:23:12 -- common/autotest_common.sh@10 -- # set +x
00:06:00.746 ************************************
00:06:00.746 START TEST event
00:06:00.746 ************************************
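The xtrace records above show the shape of the killprocess helper from common/autotest_common.sh: verify the pid is non-empty and alive, on Linux refuse to kill a sudo wrapper, then kill and wait. A minimal sketch reconstructed from that trace (the real implementation may differ in details):

```shell
# Sketch of killprocess as traced above; the @NNN comments map each step
# to the autotest_common.sh line numbers visible in the xtrace output.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                  # @950: refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 1     # @954: is the process alive?
    if [ "$(uname)" = Linux ]; then            # @955
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # @956
        [ "$process_name" != sudo ] || return 1           # @960: never kill sudo itself
    fi
    echo "killing process with pid $pid"       # @968
    kill "$pid"                                # @969
    wait "$pid" 2>/dev/null || true            # @974: reap if it is our child
}
```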
00:06:00.746 12:23:12 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:00.746 * Looking for test storage...
00:06:00.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:00.746 12:23:12 event -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:00.746 12:23:12 event -- common/autotest_common.sh@1681 -- # lcov --version
00:06:00.746 12:23:12 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:01.007 12:23:12 event -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:01.007 12:23:12 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:01.007 12:23:12 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:01.007 12:23:12 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:01.007 12:23:12 event -- scripts/common.sh@336 -- # IFS=.-:
00:06:01.007 12:23:12 event -- scripts/common.sh@336 -- # read -ra ver1
00:06:01.007 12:23:12 event -- scripts/common.sh@337 -- # IFS=.-:
00:06:01.007 12:23:12 event -- scripts/common.sh@337 -- # read -ra ver2
00:06:01.007 12:23:12 event -- scripts/common.sh@338 -- # local 'op=<'
00:06:01.007 12:23:12 event -- scripts/common.sh@340 -- # ver1_l=2
00:06:01.007 12:23:12 event -- scripts/common.sh@341 -- # ver2_l=1
00:06:01.007 12:23:12 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:01.007 12:23:12 event -- scripts/common.sh@344 -- # case "$op" in
00:06:01.007 12:23:12 event -- scripts/common.sh@345 -- # : 1
00:06:01.007 12:23:12 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:01.007 12:23:12 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:01.007 12:23:12 event -- scripts/common.sh@365 -- # decimal 1
00:06:01.007 12:23:12 event -- scripts/common.sh@353 -- # local d=1
00:06:01.007 12:23:12 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:01.007 12:23:12 event -- scripts/common.sh@355 -- # echo 1
00:06:01.007 12:23:12 event -- scripts/common.sh@365 -- # ver1[v]=1
00:06:01.007 12:23:12 event -- scripts/common.sh@366 -- # decimal 2
00:06:01.007 12:23:12 event -- scripts/common.sh@353 -- # local d=2
00:06:01.007 12:23:12 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:01.007 12:23:12 event -- scripts/common.sh@355 -- # echo 2
00:06:01.007 12:23:12 event -- scripts/common.sh@366 -- # ver2[v]=2
00:06:01.007 12:23:12 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:01.007 12:23:12 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:01.007 12:23:12 event -- scripts/common.sh@368 -- # return 0
00:06:01.007 12:23:12 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:01.007 12:23:12 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:01.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.007 --rc genhtml_branch_coverage=1
00:06:01.007 --rc genhtml_function_coverage=1
00:06:01.007 --rc genhtml_legend=1
00:06:01.007 --rc geninfo_all_blocks=1
00:06:01.007 --rc geninfo_unexecuted_blocks=1
00:06:01.007
00:06:01.007 '
00:06:01.007 12:23:12 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:01.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.007 --rc genhtml_branch_coverage=1
00:06:01.007 --rc genhtml_function_coverage=1
00:06:01.007 --rc genhtml_legend=1
00:06:01.007 --rc geninfo_all_blocks=1
00:06:01.007 --rc geninfo_unexecuted_blocks=1
00:06:01.007
00:06:01.007 '
00:06:01.007 12:23:12 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:06:01.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.007 --rc genhtml_branch_coverage=1
00:06:01.007 --rc genhtml_function_coverage=1
00:06:01.007 --rc genhtml_legend=1
00:06:01.007 --rc geninfo_all_blocks=1
00:06:01.007 --rc geninfo_unexecuted_blocks=1
00:06:01.007
00:06:01.007 '
00:06:01.007 12:23:12 event -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:06:01.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.007 --rc genhtml_branch_coverage=1
00:06:01.007 --rc genhtml_function_coverage=1
00:06:01.007 --rc genhtml_legend=1
00:06:01.007 --rc geninfo_all_blocks=1
00:06:01.007 --rc geninfo_unexecuted_blocks=1
00:06:01.007
00:06:01.007 '
00:06:01.007 12:23:12 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:06:01.007 12:23:12 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:01.007 12:23:12 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:01.007 12:23:12 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:06:01.007 12:23:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:01.007 12:23:12 event -- common/autotest_common.sh@10 -- # set +x
00:06:01.007 ************************************
00:06:01.007 START TEST event_perf
00:06:01.007 ************************************
00:06:01.007 12:23:12 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
Running I/O for 1 seconds...[2024-09-30 12:23:12.788468] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
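The cmp_versions trace above (scripts/common.sh@333-@368, invoked as `lt 1.15 2`) compares two version strings component by component: split on `.`, `-` and `:`, compare numerically, and pad missing components with 0, which is why 1.15 sorts before 2. A sketch reconstructed from that trace (the real cmp_versions also handles `>`, `=` and friends):

```shell
# Reconstruction of the "<" path of cmp_versions traced above:
# lt A B succeeds when version A is strictly older than version B.
lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"   # split on ., - and : as in scripts/common.sh@336
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=${#ver1[@]}
    if (( ${#ver2[@]} > max )); then max=${#ver2[@]}; fi
    for (( v = 0; v < max; v++ )); do
        # missing components default to 0; compare numerically, not lexically
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
    done
    return 1   # equal versions are not "less than"
}
```

Numeric comparison matters: `lt 1.9 1.15` is true here, whereas a lexical string compare would wrongly treat "15" as smaller than "9".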
00:06:01.007 [2024-09-30 12:23:12.788574] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58018 ]
00:06:01.267 [2024-09-30 12:23:12.953219] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:01.267 [2024-09-30 12:23:13.142433] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:06:01.267 [2024-09-30 12:23:13.142691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:06:01.267 Running I/O for 1 seconds...[2024-09-30 12:23:13.143371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:01.267 [2024-09-30 12:23:13.143401] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:06:02.648
00:06:02.648 lcore 0: 93788
00:06:02.648 lcore 1: 93791
00:06:02.648 lcore 2: 93788
00:06:02.648 lcore 3: 93791
00:06:02.648 done.
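The START TEST / END TEST banners that frame each test in this log come from a run_test wrapper (invoked above as `run_test event_perf ...`). A hypothetical reconstruction of just the banner-and-status behavior (the real wrapper in autotest_common.sh also manages xtrace and argument checks):

```shell
# Hypothetical sketch of the run_test banner pattern seen throughout
# this log: print START/END markers around a named test command and
# propagate the command's exit status.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    local rc=0
    "$@" || rc=$?            # run the test, remembering its exit status
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
```

This way a failing test still gets its END banner, and the caller sees the real exit code.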
00:06:02.907 00:06:02.907 real 0m1.823s 00:06:02.907 user 0m4.571s 00:06:02.907 sys 0m0.128s 00:06:02.907 12:23:14 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.907 12:23:14 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.907 ************************************ 00:06:02.907 END TEST event_perf 00:06:02.907 ************************************ 00:06:02.907 12:23:14 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:02.907 12:23:14 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:02.907 12:23:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.907 12:23:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.907 ************************************ 00:06:02.907 START TEST event_reactor 00:06:02.907 ************************************ 00:06:02.907 12:23:14 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:02.907 [2024-09-30 12:23:14.676371] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:02.907 [2024-09-30 12:23:14.676795] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58058 ] 00:06:03.166 [2024-09-30 12:23:14.839453] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.425 [2024-09-30 12:23:15.088184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.805 test_start 00:06:04.805 oneshot 00:06:04.805 tick 100 00:06:04.805 tick 100 00:06:04.805 tick 250 00:06:04.805 tick 100 00:06:04.805 tick 100 00:06:04.805 tick 100 00:06:04.805 tick 250 00:06:04.805 tick 500 00:06:04.805 tick 100 00:06:04.805 tick 100 00:06:04.805 tick 250 00:06:04.805 tick 100 00:06:04.805 tick 100 00:06:04.805 test_end 00:06:04.805 00:06:04.805 real 0m1.816s 00:06:04.805 user 0m1.598s 00:06:04.805 sys 0m0.109s 00:06:04.805 12:23:16 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.805 12:23:16 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:04.805 ************************************ 00:06:04.805 END TEST event_reactor 00:06:04.805 ************************************ 00:06:04.805 12:23:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:04.805 12:23:16 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:04.805 12:23:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.805 12:23:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.805 ************************************ 00:06:04.805 START TEST event_reactor_perf 00:06:04.805 ************************************ 00:06:04.805 12:23:16 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:04.805 [2024-09-30 
12:23:16.554275] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:04.805 [2024-09-30 12:23:16.554375] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58100 ] 00:06:05.064 [2024-09-30 12:23:16.716009] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.064 [2024-09-30 12:23:16.915262] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.445 test_start 00:06:06.445 test_end 00:06:06.445 Performance: 404496 events per second 00:06:06.445 00:06:06.445 real 0m1.778s 00:06:06.445 user 0m1.559s 00:06:06.445 sys 0m0.110s 00:06:06.445 12:23:18 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.445 12:23:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.445 ************************************ 00:06:06.445 END TEST event_reactor_perf 00:06:06.445 ************************************ 00:06:06.445 12:23:18 event -- event/event.sh@49 -- # uname -s 00:06:06.705 12:23:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:06.705 12:23:18 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:06.705 12:23:18 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.705 12:23:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.705 12:23:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.705 ************************************ 00:06:06.705 START TEST event_scheduler 00:06:06.705 ************************************ 00:06:06.705 12:23:18 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:06.705 * Looking for test storage... 
00:06:06.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:06.705 12:23:18 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:06.705 12:23:18 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:06.705 12:23:18 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:06.705 12:23:18 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:06.705 12:23:18 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.705 12:23:18 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.705 12:23:18 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.705 12:23:18 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.706 12:23:18 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:06.706 12:23:18 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.706 12:23:18 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:06.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.706 --rc genhtml_branch_coverage=1 00:06:06.706 --rc genhtml_function_coverage=1 00:06:06.706 --rc genhtml_legend=1 00:06:06.706 --rc geninfo_all_blocks=1 00:06:06.706 --rc geninfo_unexecuted_blocks=1 00:06:06.706 00:06:06.706 ' 00:06:06.706 12:23:18 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:06.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.706 --rc genhtml_branch_coverage=1 00:06:06.706 --rc genhtml_function_coverage=1 00:06:06.706 --rc 
genhtml_legend=1 00:06:06.706 --rc geninfo_all_blocks=1 00:06:06.706 --rc geninfo_unexecuted_blocks=1 00:06:06.706 00:06:06.706 ' 00:06:06.706 12:23:18 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:06.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.706 --rc genhtml_branch_coverage=1 00:06:06.706 --rc genhtml_function_coverage=1 00:06:06.706 --rc genhtml_legend=1 00:06:06.706 --rc geninfo_all_blocks=1 00:06:06.706 --rc geninfo_unexecuted_blocks=1 00:06:06.706 00:06:06.706 ' 00:06:06.706 12:23:18 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:06.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.706 --rc genhtml_branch_coverage=1 00:06:06.706 --rc genhtml_function_coverage=1 00:06:06.706 --rc genhtml_legend=1 00:06:06.706 --rc geninfo_all_blocks=1 00:06:06.706 --rc geninfo_unexecuted_blocks=1 00:06:06.706 00:06:06.706 ' 00:06:06.706 12:23:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:06.706 12:23:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58176 00:06:06.706 12:23:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:06.706 12:23:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.706 12:23:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58176 00:06:06.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:06.706 12:23:18 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58176 ']' 00:06:06.706 12:23:18 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.706 12:23:18 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.706 12:23:18 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.706 12:23:18 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.706 12:23:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.966 [2024-09-30 12:23:18.663877] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:06.966 [2024-09-30 12:23:18.663990] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58176 ] 00:06:06.966 [2024-09-30 12:23:18.816815] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:07.234 [2024-09-30 12:23:19.082101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.234 [2024-09-30 12:23:19.082293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.234 [2024-09-30 12:23:19.083005] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.234 [2024-09-30 12:23:19.083050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.807 12:23:19 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.807 12:23:19 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:07.807 12:23:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:07.807 12:23:19 
event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.807 12:23:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.807 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:07.807 POWER: Cannot set governor of lcore 0 to userspace 00:06:07.807 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:07.807 POWER: Cannot set governor of lcore 0 to performance 00:06:07.807 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:07.807 POWER: Cannot set governor of lcore 0 to userspace 00:06:07.807 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:07.807 POWER: Cannot set governor of lcore 0 to userspace 00:06:07.807 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:07.807 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:07.807 POWER: Unable to set Power Management Environment for lcore 0 00:06:07.807 [2024-09-30 12:23:19.532368] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:07.807 [2024-09-30 12:23:19.532396] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:07.807 [2024-09-30 12:23:19.532407] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:07.807 [2024-09-30 12:23:19.532428] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:07.807 [2024-09-30 12:23:19.532436] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:07.807 [2024-09-30 12:23:19.532446] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:07.807 12:23:19 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.807 12:23:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd 
framework_start_init 00:06:07.807 12:23:19 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.807 12:23:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.067 [2024-09-30 12:23:19.914981] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:08.067 12:23:19 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.067 12:23:19 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:08.067 12:23:19 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.067 12:23:19 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.067 12:23:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.067 ************************************ 00:06:08.067 START TEST scheduler_create_thread 00:06:08.067 ************************************ 00:06:08.067 12:23:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:08.067 12:23:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:08.067 12:23:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.067 12:23:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.067 2 00:06:08.067 12:23:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.067 12:23:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:08.067 12:23:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.067 
12:23:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.067 3 00:06:08.067 12:23:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.067 12:23:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:08.067 12:23:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.067 12:23:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.326 4 00:06:08.326 12:23:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.326 12:23:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:08.326 12:23:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.326 12:23:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.326 5 00:06:08.326 12:23:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.326 12:23:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:08.326 12:23:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.326 12:23:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.326 6 00:06:08.326 12:23:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.326 12:23:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 
-- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:08.326 12:23:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.326 12:23:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.326 7 00:06:08.326 12:23:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.326 12:23:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:08.326 12:23:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.326 12:23:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.326 8 00:06:08.326 12:23:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.326 12:23:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:08.326 12:23:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.326 12:23:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.326 9 00:06:08.327 12:23:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.327 12:23:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:08.327 12:23:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.327 12:23:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.705 10 
00:06:09.705 12:23:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.705 12:23:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:09.705 12:23:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.705 12:23:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.273 12:23:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.273 12:23:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:10.273 12:23:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:10.273 12:23:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.273 12:23:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.210 12:23:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.210 12:23:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:11.210 12:23:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.210 12:23:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.779 12:23:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.779 12:23:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:11.779 12:23:23 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:11.779 12:23:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.779 12:23:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.348 12:23:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.348 00:06:12.348 real 0m4.206s 00:06:12.348 user 0m0.023s 00:06:12.348 sys 0m0.011s 00:06:12.348 12:23:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.348 12:23:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.348 ************************************ 00:06:12.348 END TEST scheduler_create_thread 00:06:12.348 ************************************ 00:06:12.348 12:23:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:12.348 12:23:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58176 00:06:12.348 12:23:24 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58176 ']' 00:06:12.348 12:23:24 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58176 00:06:12.348 12:23:24 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:12.348 12:23:24 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.348 12:23:24 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58176 00:06:12.348 12:23:24 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:12.348 12:23:24 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:12.348 killing process with pid 58176 00:06:12.348 12:23:24 event.event_scheduler -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 58176' 00:06:12.348 12:23:24 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58176 00:06:12.348 12:23:24 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58176 00:06:12.608 [2024-09-30 12:23:24.415287] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:13.988 ************************************ 00:06:13.988 END TEST event_scheduler 00:06:13.988 ************************************ 00:06:13.988 00:06:13.988 real 0m7.476s 00:06:13.988 user 0m16.631s 00:06:13.988 sys 0m0.592s 00:06:13.988 12:23:25 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.988 12:23:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.248 12:23:25 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:14.248 12:23:25 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:14.248 12:23:25 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.248 12:23:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.248 12:23:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.248 ************************************ 00:06:14.248 START TEST app_repeat 00:06:14.248 ************************************ 00:06:14.248 12:23:25 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:14.248 12:23:25 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.248 12:23:25 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.248 12:23:25 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:14.248 12:23:25 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.248 12:23:25 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:14.248 12:23:25 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:14.248 12:23:25 event.app_repeat -- 
event/event.sh@17 -- # modprobe nbd 00:06:14.248 12:23:25 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58304 00:06:14.248 12:23:25 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:14.248 12:23:25 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.248 Process app_repeat pid: 58304 00:06:14.248 12:23:25 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58304' 00:06:14.248 12:23:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:14.248 spdk_app_start Round 0 00:06:14.248 12:23:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:14.248 12:23:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58304 /var/tmp/spdk-nbd.sock 00:06:14.248 12:23:25 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58304 ']' 00:06:14.248 12:23:25 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.248 12:23:25 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.248 12:23:25 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.248 12:23:25 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.248 12:23:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.248 [2024-09-30 12:23:25.981628] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:14.248 [2024-09-30 12:23:25.981770] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58304 ] 00:06:14.509 [2024-09-30 12:23:26.147489] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.509 [2024-09-30 12:23:26.352432] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.509 [2024-09-30 12:23:26.352460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.077 12:23:26 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.077 12:23:26 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:15.077 12:23:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.336 Malloc0 00:06:15.336 12:23:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.595 Malloc1 00:06:15.595 12:23:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.595 12:23:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.595 12:23:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.595 12:23:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.595 12:23:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.595 12:23:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.595 12:23:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.595 12:23:27 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.595 12:23:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.595 12:23:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.595 12:23:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.595 12:23:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.595 12:23:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:15.595 12:23:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.595 12:23:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.595 12:23:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.854 /dev/nbd0 00:06:15.854 12:23:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.854 12:23:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.854 12:23:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:15.854 12:23:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:15.854 12:23:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:15.854 12:23:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:15.854 12:23:27 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:15.854 12:23:27 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:15.854 12:23:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:15.854 12:23:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:15.854 12:23:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.854 1+0 records in 00:06:15.854 1+0 
records out 00:06:15.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428238 s, 9.6 MB/s 00:06:15.854 12:23:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.854 12:23:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:15.854 12:23:27 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.854 12:23:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:15.854 12:23:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:15.854 12:23:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.854 12:23:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.854 12:23:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.113 /dev/nbd1 00:06:16.113 12:23:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.113 12:23:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.113 12:23:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:16.113 12:23:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:16.113 12:23:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:16.113 12:23:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:16.113 12:23:27 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:16.113 12:23:27 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:16.113 12:23:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:16.113 12:23:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:16.113 12:23:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.113 1+0 records in 00:06:16.113 1+0 records out 00:06:16.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250064 s, 16.4 MB/s 00:06:16.113 12:23:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.113 12:23:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:16.113 12:23:27 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.113 12:23:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:16.113 12:23:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:16.113 12:23:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.113 12:23:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.113 12:23:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.113 12:23:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.113 12:23:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.372 { 00:06:16.372 "nbd_device": "/dev/nbd0", 00:06:16.372 "bdev_name": "Malloc0" 00:06:16.372 }, 00:06:16.372 { 00:06:16.372 "nbd_device": "/dev/nbd1", 00:06:16.372 "bdev_name": "Malloc1" 00:06:16.372 } 00:06:16.372 ]' 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.372 { 00:06:16.372 "nbd_device": "/dev/nbd0", 00:06:16.372 "bdev_name": "Malloc0" 00:06:16.372 }, 00:06:16.372 { 00:06:16.372 "nbd_device": "/dev/nbd1", 00:06:16.372 "bdev_name": "Malloc1" 00:06:16.372 } 00:06:16.372 ]' 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.372 /dev/nbd1' 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.372 /dev/nbd1' 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.372 256+0 records in 00:06:16.372 256+0 records out 00:06:16.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113353 s, 92.5 MB/s 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.372 256+0 records in 00:06:16.372 256+0 records out 00:06:16.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233937 s, 44.8 MB/s 00:06:16.372 12:23:28 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.372 256+0 records in 00:06:16.372 256+0 records out 00:06:16.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241335 s, 43.4 MB/s 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.372 12:23:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.373 12:23:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.373 12:23:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.373 12:23:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.373 12:23:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.373 12:23:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.373 12:23:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.373 12:23:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.373 12:23:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:16.373 12:23:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.373 12:23:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.373 12:23:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.373 12:23:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.373 12:23:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.373 12:23:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:16.373 12:23:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.373 12:23:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.632 12:23:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.632 12:23:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.632 12:23:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.632 12:23:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.632 12:23:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.632 12:23:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.632 12:23:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.632 12:23:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.632 12:23:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.632 12:23:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.892 12:23:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.892 12:23:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.892 12:23:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.892 12:23:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.892 12:23:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.892 12:23:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.892 12:23:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:16.892 12:23:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.892 12:23:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.892 12:23:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.892 12:23:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.150 12:23:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.150 12:23:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.150 12:23:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.150 12:23:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.150 12:23:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.151 12:23:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.151 12:23:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:17.151 12:23:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.151 12:23:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.151 12:23:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.151 12:23:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.151 12:23:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.151 12:23:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.718 12:23:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:19.096 [2024-09-30 12:23:30.579154] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.096 [2024-09-30 12:23:30.809991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.096 [2024-09-30 12:23:30.809994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.355 
[2024-09-30 12:23:31.029509] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:19.355 [2024-09-30 12:23:31.029581] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.733 12:23:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.733 spdk_app_start Round 1 00:06:20.733 12:23:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:20.733 12:23:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58304 /var/tmp/spdk-nbd.sock 00:06:20.733 12:23:32 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58304 ']' 00:06:20.733 12:23:32 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.733 12:23:32 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.733 12:23:32 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:20.733 12:23:32 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.733 12:23:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.733 12:23:32 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.733 12:23:32 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:20.733 12:23:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.992 Malloc0 00:06:20.992 12:23:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.250 Malloc1 00:06:21.250 12:23:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.250 12:23:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.250 12:23:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.250 12:23:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.250 12:23:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.250 12:23:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.250 12:23:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.250 12:23:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.250 12:23:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.250 12:23:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.251 12:23:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.251 12:23:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.251 12:23:33 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:21.251 12:23:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.251 12:23:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.251 12:23:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.510 /dev/nbd0 00:06:21.510 12:23:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.510 12:23:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.510 12:23:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:21.510 12:23:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:21.510 12:23:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:21.510 12:23:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:21.510 12:23:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:21.510 12:23:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:21.510 12:23:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:21.510 12:23:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:21.510 12:23:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.510 1+0 records in 00:06:21.510 1+0 records out 00:06:21.510 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407796 s, 10.0 MB/s 00:06:21.510 12:23:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.510 12:23:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:21.510 12:23:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.510 
12:23:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:21.510 12:23:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:21.510 12:23:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.510 12:23:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.510 12:23:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.770 /dev/nbd1 00:06:21.770 12:23:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.770 12:23:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.770 12:23:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:21.770 12:23:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:21.770 12:23:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:21.770 12:23:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:21.770 12:23:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:21.770 12:23:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:21.770 12:23:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:21.770 12:23:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:21.770 12:23:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.770 1+0 records in 00:06:21.770 1+0 records out 00:06:21.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036091 s, 11.3 MB/s 00:06:21.770 12:23:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.770 12:23:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:21.770 12:23:33 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.770 12:23:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:21.770 12:23:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:21.770 12:23:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.770 12:23:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.770 12:23:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.770 12:23:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.770 12:23:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.029 { 00:06:22.029 "nbd_device": "/dev/nbd0", 00:06:22.029 "bdev_name": "Malloc0" 00:06:22.029 }, 00:06:22.029 { 00:06:22.029 "nbd_device": "/dev/nbd1", 00:06:22.029 "bdev_name": "Malloc1" 00:06:22.029 } 00:06:22.029 ]' 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.029 { 00:06:22.029 "nbd_device": "/dev/nbd0", 00:06:22.029 "bdev_name": "Malloc0" 00:06:22.029 }, 00:06:22.029 { 00:06:22.029 "nbd_device": "/dev/nbd1", 00:06:22.029 "bdev_name": "Malloc1" 00:06:22.029 } 00:06:22.029 ]' 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.029 /dev/nbd1' 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.029 /dev/nbd1' 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:22.029 
12:23:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:22.029 256+0 records in 00:06:22.029 256+0 records out 00:06:22.029 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110131 s, 95.2 MB/s 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:22.029 256+0 records in 00:06:22.029 256+0 records out 00:06:22.029 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253894 s, 41.3 MB/s 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.029 12:23:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:22.288 256+0 records in 00:06:22.288 256+0 records out 00:06:22.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029012 s, 36.1 MB/s 00:06:22.288 12:23:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:22.288 12:23:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.288 12:23:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.288 12:23:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:22.288 12:23:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.288 12:23:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:22.288 12:23:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:22.288 12:23:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.288 12:23:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:22.288 12:23:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.288 12:23:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:22.288 12:23:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.288 12:23:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:22.288 12:23:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.288 12:23:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.288 12:23:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.288 12:23:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:22.288 12:23:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.288 12:23:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.288 12:23:34 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.288 12:23:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.288 12:23:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.288 12:23:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.288 12:23:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.288 12:23:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.288 12:23:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.289 12:23:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.289 12:23:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.289 12:23:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:22.547 12:23:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:22.547 12:23:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:22.547 12:23:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:22.547 12:23:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.547 12:23:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.547 12:23:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:22.547 12:23:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.547 12:23:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.547 12:23:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.547 12:23:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.547 12:23:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.806 12:23:34 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.806 12:23:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.806 12:23:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.807 12:23:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.807 12:23:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.807 12:23:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.807 12:23:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:22.807 12:23:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.807 12:23:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.807 12:23:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.807 12:23:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.807 12:23:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.807 12:23:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:23.374 12:23:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:24.750 [2024-09-30 12:23:36.258718] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.750 [2024-09-30 12:23:36.460151] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.750 [2024-09-30 12:23:36.460172] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.008 [2024-09-30 12:23:36.648909] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:25.008 [2024-09-30 12:23:36.648974] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:26.384 spdk_app_start Round 2 00:06:26.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:26.384 12:23:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:26.384 12:23:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:26.384 12:23:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58304 /var/tmp/spdk-nbd.sock 00:06:26.384 12:23:37 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58304 ']' 00:06:26.384 12:23:37 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.384 12:23:37 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.384 12:23:37 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.384 12:23:37 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.384 12:23:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.384 12:23:38 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.384 12:23:38 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:26.384 12:23:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.644 Malloc0 00:06:26.644 12:23:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.903 Malloc1 00:06:26.903 12:23:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.903 12:23:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.903 12:23:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.903 12:23:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.903 12:23:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.903 12:23:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.903 12:23:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.903 12:23:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.903 12:23:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.903 12:23:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.903 12:23:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.903 12:23:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.903 12:23:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.903 12:23:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.903 12:23:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.903 12:23:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:27.163 /dev/nbd0 00:06:27.163 12:23:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:27.163 12:23:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:27.163 12:23:38 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:27.163 12:23:38 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:27.163 12:23:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:27.163 12:23:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:27.163 12:23:38 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:27.163 12:23:38 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:27.163 12:23:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:06:27.163 12:23:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:27.163 12:23:38 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.163 1+0 records in 00:06:27.163 1+0 records out 00:06:27.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357765 s, 11.4 MB/s 00:06:27.163 12:23:38 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.163 12:23:38 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:27.163 12:23:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.163 12:23:38 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:27.163 12:23:38 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:27.163 12:23:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.163 12:23:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.163 12:23:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:27.450 /dev/nbd1 00:06:27.450 12:23:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:27.450 12:23:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:27.450 12:23:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:27.450 12:23:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:27.450 12:23:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:27.450 12:23:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:27.450 12:23:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:27.450 12:23:39 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:27.450 12:23:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:27.450 12:23:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:27.450 12:23:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.450 1+0 records in 00:06:27.450 1+0 records out 00:06:27.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474221 s, 8.6 MB/s 00:06:27.450 12:23:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.450 12:23:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:27.450 12:23:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.450 12:23:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:27.450 12:23:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:27.450 12:23:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.450 12:23:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.450 12:23:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.450 12:23:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.450 12:23:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.744 { 00:06:27.744 "nbd_device": "/dev/nbd0", 00:06:27.744 "bdev_name": "Malloc0" 00:06:27.744 }, 00:06:27.744 { 00:06:27.744 "nbd_device": "/dev/nbd1", 00:06:27.744 "bdev_name": "Malloc1" 00:06:27.744 } 00:06:27.744 ]' 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.744 { 00:06:27.744 "nbd_device": "/dev/nbd0", 00:06:27.744 "bdev_name": "Malloc0" 00:06:27.744 }, 00:06:27.744 { 00:06:27.744 "nbd_device": "/dev/nbd1", 00:06:27.744 "bdev_name": "Malloc1" 00:06:27.744 } 00:06:27.744 ]' 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.744 /dev/nbd1' 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.744 /dev/nbd1' 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.744 256+0 records in 00:06:27.744 256+0 records out 00:06:27.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120556 s, 87.0 MB/s 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.744 12:23:39 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.744 256+0 records in 00:06:27.744 256+0 records out 00:06:27.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257346 s, 40.7 MB/s 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.744 256+0 records in 00:06:27.744 256+0 records out 00:06:27.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.031567 s, 33.2 MB/s 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.744 12:23:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.745 12:23:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.745 12:23:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.745 12:23:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.745 12:23:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.745 12:23:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.745 12:23:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.745 12:23:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.745 12:23:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.745 12:23:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.745 12:23:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.745 12:23:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.745 12:23:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.745 12:23:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.745 12:23:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.745 12:23:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.004 12:23:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.004 12:23:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.004 12:23:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.004 12:23:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.004 12:23:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.004 12:23:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.004 12:23:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.004 12:23:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.004 12:23:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.004 12:23:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:28.263 12:23:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:28.263 12:23:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:28.263 12:23:39 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:06:28.263 12:23:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.263 12:23:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.263 12:23:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:28.263 12:23:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.263 12:23:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.263 12:23:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.263 12:23:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.263 12:23:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.522 12:23:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:28.522 12:23:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:28.522 12:23:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.522 12:23:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.522 12:23:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.522 12:23:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.522 12:23:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:28.522 12:23:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.522 12:23:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.522 12:23:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:28.522 12:23:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:28.522 12:23:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:28.522 12:23:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.781 12:23:40 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:06:30.281 [2024-09-30 12:23:41.889003] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.281 [2024-09-30 12:23:42.091646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.281 [2024-09-30 12:23:42.091649] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.540 [2024-09-30 12:23:42.281468] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:30.540 [2024-09-30 12:23:42.281555] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.920 12:23:43 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58304 /var/tmp/spdk-nbd.sock 00:06:31.920 12:23:43 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58304 ']' 00:06:31.920 12:23:43 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.920 12:23:43 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.920 12:23:43 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:31.920 12:23:43 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.920 12:23:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.180 12:23:43 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.180 12:23:43 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:32.180 12:23:43 event.app_repeat -- event/event.sh@39 -- # killprocess 58304 00:06:32.180 12:23:43 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58304 ']' 00:06:32.180 12:23:43 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58304 00:06:32.180 12:23:43 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:32.180 12:23:43 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.180 12:23:43 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58304 00:06:32.180 12:23:43 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.180 12:23:43 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.180 killing process with pid 58304 00:06:32.180 12:23:43 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58304' 00:06:32.180 12:23:43 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58304 00:06:32.180 12:23:43 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58304 00:06:33.119 spdk_app_start is called in Round 0. 00:06:33.119 Shutdown signal received, stop current app iteration 00:06:33.119 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:06:33.119 spdk_app_start is called in Round 1. 00:06:33.119 Shutdown signal received, stop current app iteration 00:06:33.119 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:06:33.119 spdk_app_start is called in Round 2. 
00:06:33.119 Shutdown signal received, stop current app iteration 00:06:33.119 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:06:33.119 spdk_app_start is called in Round 3. 00:06:33.119 Shutdown signal received, stop current app iteration 00:06:33.119 12:23:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:33.119 12:23:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:33.119 00:06:33.119 real 0m19.094s 00:06:33.119 user 0m39.783s 00:06:33.119 sys 0m2.680s 00:06:33.119 12:23:45 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.119 12:23:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.119 ************************************ 00:06:33.119 END TEST app_repeat 00:06:33.119 ************************************ 00:06:33.378 12:23:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:33.378 12:23:45 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:33.378 12:23:45 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.378 12:23:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.378 12:23:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.378 ************************************ 00:06:33.378 START TEST cpu_locks 00:06:33.378 ************************************ 00:06:33.378 12:23:45 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:33.378 * Looking for test storage... 
00:06:33.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:33.379 12:23:45 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:33.379 12:23:45 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:33.379 12:23:45 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:33.648 12:23:45 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:33.648 12:23:45 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.648 12:23:45 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.648 12:23:45 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.648 12:23:45 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.648 12:23:45 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.648 12:23:45 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.648 12:23:45 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.648 12:23:45 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.648 12:23:45 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.648 12:23:45 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.648 12:23:45 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.648 12:23:45 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:33.648 12:23:45 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:33.648 12:23:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.648 12:23:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.648 12:23:45 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:33.648 12:23:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:33.649 12:23:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.649 12:23:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:33.649 12:23:45 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.649 12:23:45 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:33.649 12:23:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:33.649 12:23:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.649 12:23:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:33.649 12:23:45 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.649 12:23:45 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.649 12:23:45 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.649 12:23:45 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:33.649 12:23:45 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.649 12:23:45 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:33.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.649 --rc genhtml_branch_coverage=1 00:06:33.649 --rc genhtml_function_coverage=1 00:06:33.649 --rc genhtml_legend=1 00:06:33.649 --rc geninfo_all_blocks=1 00:06:33.649 --rc geninfo_unexecuted_blocks=1 00:06:33.649 00:06:33.649 ' 00:06:33.649 12:23:45 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:33.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.649 --rc genhtml_branch_coverage=1 00:06:33.649 --rc genhtml_function_coverage=1 00:06:33.649 --rc genhtml_legend=1 00:06:33.649 --rc geninfo_all_blocks=1 00:06:33.650 --rc geninfo_unexecuted_blocks=1 
00:06:33.650 00:06:33.650 ' 00:06:33.650 12:23:45 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:33.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.650 --rc genhtml_branch_coverage=1 00:06:33.650 --rc genhtml_function_coverage=1 00:06:33.650 --rc genhtml_legend=1 00:06:33.650 --rc geninfo_all_blocks=1 00:06:33.650 --rc geninfo_unexecuted_blocks=1 00:06:33.650 00:06:33.650 ' 00:06:33.650 12:23:45 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:33.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.650 --rc genhtml_branch_coverage=1 00:06:33.650 --rc genhtml_function_coverage=1 00:06:33.650 --rc genhtml_legend=1 00:06:33.650 --rc geninfo_all_blocks=1 00:06:33.650 --rc geninfo_unexecuted_blocks=1 00:06:33.650 00:06:33.650 ' 00:06:33.650 12:23:45 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:33.650 12:23:45 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:33.650 12:23:45 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:33.650 12:23:45 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:33.650 12:23:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.650 12:23:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.650 12:23:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.650 ************************************ 00:06:33.650 START TEST default_locks 00:06:33.650 ************************************ 00:06:33.650 12:23:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:33.650 12:23:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58746 00:06:33.650 12:23:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58746 00:06:33.650 12:23:45 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.650 12:23:45 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58746 ']' 00:06:33.650 12:23:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.650 12:23:45 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.650 12:23:45 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.650 12:23:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.650 12:23:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.650 [2024-09-30 12:23:45.424851] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:33.650 [2024-09-30 12:23:45.425027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58746 ] 00:06:33.910 [2024-09-30 12:23:45.593373] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.910 [2024-09-30 12:23:45.798880] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.848 12:23:46 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.848 12:23:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:34.848 12:23:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58746 00:06:34.848 12:23:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58746 00:06:34.848 12:23:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.107 12:23:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58746 00:06:35.107 12:23:46 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58746 ']' 00:06:35.107 12:23:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58746 00:06:35.107 12:23:46 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:35.107 12:23:46 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.107 12:23:46 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58746 00:06:35.107 12:23:46 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.107 12:23:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.107 killing process with pid 58746 00:06:35.107 12:23:46 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58746' 00:06:35.107 12:23:46 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58746 00:06:35.107 12:23:46 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58746 00:06:37.644 12:23:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58746 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58746 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58746 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58746 ']' 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.645 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58746) - No such process 00:06:37.645 ERROR: process (pid: 58746) is no longer running 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:37.645 00:06:37.645 real 0m4.132s 00:06:37.645 user 0m4.017s 00:06:37.645 sys 0m0.620s 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.645 12:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.645 ************************************ 00:06:37.645 END TEST default_locks 00:06:37.645 ************************************ 00:06:37.645 12:23:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:37.645 12:23:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:06:37.645 12:23:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.645 12:23:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.645 ************************************ 00:06:37.645 START TEST default_locks_via_rpc 00:06:37.645 ************************************ 00:06:37.645 12:23:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:37.645 12:23:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58821 00:06:37.645 12:23:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.645 12:23:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58821 00:06:37.645 12:23:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58821 ']' 00:06:37.645 12:23:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.645 12:23:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.645 12:23:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.645 12:23:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.645 12:23:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.904 [2024-09-30 12:23:49.622950] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:37.904 [2024-09-30 12:23:49.623074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58821 ] 00:06:37.904 [2024-09-30 12:23:49.787086] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.163 [2024-09-30 12:23:49.988705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.101 12:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.101 12:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:39.101 12:23:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:39.101 12:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.101 12:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.101 12:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.101 12:23:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:39.101 12:23:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:39.101 12:23:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:39.101 12:23:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:39.101 12:23:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:39.101 12:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.101 12:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.101 12:23:50 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.101 12:23:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58821 00:06:39.101 12:23:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58821 00:06:39.101 12:23:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.669 12:23:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58821 00:06:39.669 12:23:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58821 ']' 00:06:39.669 12:23:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58821 00:06:39.669 12:23:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:39.669 12:23:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.669 12:23:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58821 00:06:39.669 12:23:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:39.669 12:23:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:39.669 killing process with pid 58821 00:06:39.669 12:23:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58821' 00:06:39.669 12:23:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58821 00:06:39.669 12:23:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58821 00:06:42.208 00:06:42.208 real 0m4.190s 00:06:42.208 user 0m4.090s 00:06:42.208 sys 0m0.702s 00:06:42.208 12:23:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.208 12:23:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.208 ************************************ 00:06:42.208 END TEST default_locks_via_rpc 00:06:42.208 ************************************ 00:06:42.208 12:23:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:42.208 12:23:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.208 12:23:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.208 12:23:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.208 ************************************ 00:06:42.208 START TEST non_locking_app_on_locked_coremask 00:06:42.208 ************************************ 00:06:42.208 12:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:42.208 12:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58895 00:06:42.208 12:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.208 12:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58895 /var/tmp/spdk.sock 00:06:42.208 12:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58895 ']' 00:06:42.208 12:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.208 12:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:42.208 12:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.208 12:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.208 12:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.208 [2024-09-30 12:23:53.887284] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:42.208 [2024-09-30 12:23:53.887420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58895 ] 00:06:42.208 [2024-09-30 12:23:54.049721] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.467 [2024-09-30 12:23:54.261357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.404 12:23:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.404 12:23:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:43.404 12:23:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58916 00:06:43.404 12:23:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58916 /var/tmp/spdk2.sock 00:06:43.404 12:23:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:43.404 12:23:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58916 ']' 00:06:43.404 12:23:55 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.404 12:23:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.404 12:23:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.404 12:23:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.404 12:23:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.404 [2024-09-30 12:23:55.145417] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:43.404 [2024-09-30 12:23:55.145566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58916 ] 00:06:43.404 [2024-09-30 12:23:55.295922] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.404 [2024-09-30 12:23:55.296001] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.972 [2024-09-30 12:23:55.698223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.880 12:23:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.880 12:23:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:45.880 12:23:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58895 00:06:45.880 12:23:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58895 00:06:45.880 12:23:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.449 12:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58895 00:06:46.449 12:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58895 ']' 00:06:46.449 12:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58895 00:06:46.449 12:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:46.449 12:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.449 12:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58895 00:06:46.449 12:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:46.449 12:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:46.449 killing process with pid 58895 00:06:46.449 12:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58895' 00:06:46.449 12:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58895 00:06:46.449 12:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58895 00:06:51.725 12:24:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58916 00:06:51.725 12:24:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58916 ']' 00:06:51.725 12:24:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58916 00:06:51.725 12:24:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:51.725 12:24:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.725 12:24:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58916 00:06:51.725 12:24:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.725 12:24:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.725 killing process with pid 58916 00:06:51.725 12:24:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58916' 00:06:51.725 12:24:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58916 00:06:51.725 12:24:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58916 00:06:53.642 00:06:53.642 real 0m11.615s 00:06:53.642 user 0m11.756s 00:06:53.642 sys 0m1.292s 00:06:53.642 12:24:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:53.642 12:24:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.642 ************************************ 00:06:53.642 END TEST non_locking_app_on_locked_coremask 00:06:53.642 ************************************ 00:06:53.642 12:24:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:53.642 12:24:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.642 12:24:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.642 12:24:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.642 ************************************ 00:06:53.642 START TEST locking_app_on_unlocked_coremask 00:06:53.642 ************************************ 00:06:53.642 12:24:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:53.642 12:24:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59068 00:06:53.642 12:24:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:53.642 12:24:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59068 /var/tmp/spdk.sock 00:06:53.642 12:24:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59068 ']' 00:06:53.642 12:24:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.642 12:24:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:53.642 12:24:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.642 12:24:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.642 12:24:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.902 [2024-09-30 12:24:05.571392] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:53.902 [2024-09-30 12:24:05.571538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59068 ] 00:06:53.902 [2024-09-30 12:24:05.733310] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:53.902 [2024-09-30 12:24:05.733387] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.161 [2024-09-30 12:24:05.930912] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.101 12:24:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.101 12:24:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:55.101 12:24:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59089 00:06:55.101 12:24:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59089 /var/tmp/spdk2.sock 00:06:55.101 12:24:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59089 ']' 00:06:55.101 12:24:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.101 12:24:06 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.101 12:24:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.101 12:24:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.101 12:24:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.101 12:24:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:55.101 [2024-09-30 12:24:06.837007] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:55.101 [2024-09-30 12:24:06.837137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59089 ] 00:06:55.101 [2024-09-30 12:24:06.986476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.670 [2024-09-30 12:24:07.386859] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.578 12:24:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.578 12:24:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:57.578 12:24:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59089 00:06:57.578 12:24:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59089 00:06:57.578 12:24:09 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.845 12:24:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59068 00:06:57.845 12:24:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59068 ']' 00:06:57.845 12:24:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59068 00:06:57.845 12:24:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:57.845 12:24:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.845 12:24:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59068 00:06:58.120 12:24:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.120 12:24:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.120 killing process with pid 59068 00:06:58.120 12:24:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59068' 00:06:58.120 12:24:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59068 00:06:58.120 12:24:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59068 00:07:03.408 12:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59089 00:07:03.408 12:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59089 ']' 00:07:03.408 12:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59089 00:07:03.408 12:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@955 -- # uname 00:07:03.408 12:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.408 12:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59089 00:07:03.408 12:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.408 12:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.408 killing process with pid 59089 00:07:03.408 12:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59089' 00:07:03.408 12:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59089 00:07:03.408 12:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59089 00:07:05.318 00:07:05.318 real 0m11.469s 00:07:05.318 user 0m11.564s 00:07:05.318 sys 0m1.249s 00:07:05.318 12:24:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.318 12:24:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.318 ************************************ 00:07:05.318 END TEST locking_app_on_unlocked_coremask 00:07:05.318 ************************************ 00:07:05.318 12:24:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:05.318 12:24:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.318 12:24:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.318 12:24:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.318 ************************************ 00:07:05.318 START TEST 
locking_app_on_locked_coremask 00:07:05.318 ************************************ 00:07:05.318 12:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:05.318 12:24:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59234 00:07:05.318 12:24:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.318 12:24:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59234 /var/tmp/spdk.sock 00:07:05.318 12:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59234 ']' 00:07:05.318 12:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.318 12:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.318 12:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.318 12:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.318 12:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.318 [2024-09-30 12:24:17.126252] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:05.318 [2024-09-30 12:24:17.126388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59234 ] 00:07:05.578 [2024-09-30 12:24:17.296298] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.837 [2024-09-30 12:24:17.501815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.774 12:24:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.774 12:24:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:06.774 12:24:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59255 00:07:06.774 12:24:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59255 /var/tmp/spdk2.sock 00:07:06.774 12:24:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:06.774 12:24:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:06.774 12:24:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59255 /var/tmp/spdk2.sock 00:07:06.774 12:24:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:06.774 12:24:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.774 12:24:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:06.774 12:24:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:07:06.774 12:24:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59255 /var/tmp/spdk2.sock 00:07:06.774 12:24:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59255 ']' 00:07:06.774 12:24:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.774 12:24:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.774 12:24:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.774 12:24:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.774 12:24:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.774 [2024-09-30 12:24:18.446430] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:06.774 [2024-09-30 12:24:18.446560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59255 ] 00:07:06.774 [2024-09-30 12:24:18.608179] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59234 has claimed it. 00:07:06.774 [2024-09-30 12:24:18.608255] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:07.343 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59255) - No such process 00:07:07.343 ERROR: process (pid: 59255) is no longer running 00:07:07.343 12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.343 12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:07.343 12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:07.343 12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.343 12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.343 12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.343 12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59234 00:07:07.343 12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59234 00:07:07.343 12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.911 12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59234 00:07:07.911 12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59234 ']' 00:07:07.911 12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59234 00:07:07.911 12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:07.911 12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.911 12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59234 00:07:07.911 
12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.911 12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.911 killing process with pid 59234 00:07:07.911 12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59234' 00:07:07.911 12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59234 00:07:07.911 12:24:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59234 00:07:10.452 00:07:10.452 real 0m4.932s 00:07:10.452 user 0m5.063s 00:07:10.452 sys 0m0.854s 00:07:10.452 12:24:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.452 12:24:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.452 ************************************ 00:07:10.452 END TEST locking_app_on_locked_coremask 00:07:10.452 ************************************ 00:07:10.452 12:24:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:10.452 12:24:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.452 12:24:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.452 12:24:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.452 ************************************ 00:07:10.452 START TEST locking_overlapped_coremask 00:07:10.452 ************************************ 00:07:10.452 12:24:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:10.452 12:24:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59327 00:07:10.452 12:24:22 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:10.452 12:24:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59327 /var/tmp/spdk.sock 00:07:10.452 12:24:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59327 ']' 00:07:10.452 12:24:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.452 12:24:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.452 12:24:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.452 12:24:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.452 12:24:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.452 [2024-09-30 12:24:22.115997] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:10.452 [2024-09-30 12:24:22.116124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59327 ] 00:07:10.452 [2024-09-30 12:24:22.281137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.712 [2024-09-30 12:24:22.487685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.712 [2024-09-30 12:24:22.487837] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.712 [2024-09-30 12:24:22.487888] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.651 12:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.651 12:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:11.651 12:24:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59345 00:07:11.651 12:24:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:11.651 12:24:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59345 /var/tmp/spdk2.sock 00:07:11.651 12:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:11.651 12:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59345 /var/tmp/spdk2.sock 00:07:11.651 12:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:11.651 12:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.651 12:24:23 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:11.651 12:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.651 12:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59345 /var/tmp/spdk2.sock 00:07:11.651 12:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59345 ']' 00:07:11.651 12:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.651 12:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.651 12:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.651 12:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.651 12:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.651 [2024-09-30 12:24:23.416855] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:11.651 [2024-09-30 12:24:23.416963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59345 ] 00:07:11.911 [2024-09-30 12:24:23.570124] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59327 has claimed it. 00:07:11.911 [2024-09-30 12:24:23.570175] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:12.171 ERROR: process (pid: 59345) is no longer running 00:07:12.171 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59345) - No such process 00:07:12.171 12:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.171 12:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:12.171 12:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:12.171 12:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:12.171 12:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:12.171 12:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:12.171 12:24:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:12.171 12:24:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:12.171 12:24:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:12.171 12:24:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:12.171 12:24:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59327 00:07:12.171 12:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59327 ']' 00:07:12.171 12:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59327 00:07:12.171 12:24:24 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:12.171 12:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.171 12:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59327 00:07:12.431 12:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.431 12:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.431 12:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59327' 00:07:12.431 killing process with pid 59327 00:07:12.431 12:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59327 00:07:12.431 12:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59327 00:07:14.972 00:07:14.972 real 0m4.509s 00:07:14.972 user 0m11.833s 00:07:14.972 sys 0m0.586s 00:07:14.972 12:24:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.972 12:24:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.972 ************************************ 00:07:14.972 END TEST locking_overlapped_coremask 00:07:14.972 ************************************ 00:07:14.972 12:24:26 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:14.972 12:24:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.972 12:24:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.972 12:24:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.972 ************************************ 00:07:14.972 START TEST 
locking_overlapped_coremask_via_rpc 00:07:14.972 ************************************ 00:07:14.972 12:24:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:14.972 12:24:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59409 00:07:14.972 12:24:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:14.972 12:24:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59409 /var/tmp/spdk.sock 00:07:14.972 12:24:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59409 ']' 00:07:14.972 12:24:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.972 12:24:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.972 12:24:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.972 12:24:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.972 12:24:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.972 [2024-09-30 12:24:26.701432] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:14.972 [2024-09-30 12:24:26.701565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59409 ] 00:07:15.232 [2024-09-30 12:24:26.869696] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:15.232 [2024-09-30 12:24:26.869787] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.232 [2024-09-30 12:24:27.075125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.232 [2024-09-30 12:24:27.075178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.232 [2024-09-30 12:24:27.075204] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.177 12:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.177 12:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:16.177 12:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59432 00:07:16.177 12:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59432 /var/tmp/spdk2.sock 00:07:16.177 12:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:16.177 12:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59432 ']' 00:07:16.177 12:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.177 12:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.177 12:24:27 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.177 12:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.177 12:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.177 [2024-09-30 12:24:27.980846] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:16.177 [2024-09-30 12:24:27.981522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59432 ] 00:07:16.445 [2024-09-30 12:24:28.141286] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:16.445 [2024-09-30 12:24:28.141328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.704 [2024-09-30 12:24:28.551247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.704 [2024-09-30 12:24:28.554974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.704 [2024-09-30 12:24:28.555041] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:18.609 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.609 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:18.609 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:18.609 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.609 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.609 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.609 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:18.609 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:18.609 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:18.609 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.869 12:24:30 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.869 [2024-09-30 12:24:30.509832] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59409 has claimed it. 00:07:18.869 request: 00:07:18.869 { 00:07:18.869 "method": "framework_enable_cpumask_locks", 00:07:18.869 "req_id": 1 00:07:18.869 } 00:07:18.869 Got JSON-RPC error response 00:07:18.869 response: 00:07:18.869 { 00:07:18.869 "code": -32603, 00:07:18.869 "message": "Failed to claim CPU core: 2" 00:07:18.869 } 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59409 /var/tmp/spdk.sock 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 59409 ']' 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59432 /var/tmp/spdk2.sock 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59432 ']' 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.869 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.129 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.129 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:19.129 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:19.129 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:19.129 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:19.129 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:19.129 00:07:19.129 real 0m4.341s 00:07:19.129 user 0m1.203s 00:07:19.129 sys 0m0.195s 00:07:19.129 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.129 12:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.129 ************************************ 00:07:19.129 END TEST locking_overlapped_coremask_via_rpc 00:07:19.129 ************************************ 00:07:19.129 12:24:30 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:19.129 12:24:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59409 ]] 00:07:19.129 12:24:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59409 00:07:19.129 12:24:30 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59409 ']' 00:07:19.129 12:24:30 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59409 00:07:19.129 12:24:30 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:19.129 12:24:30 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.129 12:24:30 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59409 00:07:19.389 12:24:31 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.389 12:24:31 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.389 12:24:31 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59409' 00:07:19.389 killing process with pid 59409 00:07:19.389 12:24:31 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59409 00:07:19.389 12:24:31 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59409 00:07:21.930 12:24:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59432 ]] 00:07:21.930 12:24:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59432 00:07:21.930 12:24:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59432 ']' 00:07:21.930 12:24:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59432 00:07:21.930 12:24:33 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:21.930 12:24:33 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.930 12:24:33 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59432 00:07:21.930 12:24:33 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:21.930 12:24:33 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:21.930 killing process with pid 59432 00:07:21.930 12:24:33 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 59432' 00:07:21.930 12:24:33 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59432 00:07:21.930 12:24:33 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59432 00:07:24.470 12:24:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:24.470 12:24:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:24.470 12:24:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59409 ]] 00:07:24.470 12:24:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59409 00:07:24.470 12:24:36 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59409 ']' 00:07:24.470 Process with pid 59409 is not found 00:07:24.470 12:24:36 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59409 00:07:24.470 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59409) - No such process 00:07:24.470 12:24:36 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59409 is not found' 00:07:24.470 12:24:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59432 ]] 00:07:24.470 12:24:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59432 00:07:24.470 12:24:36 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59432 ']' 00:07:24.470 12:24:36 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59432 00:07:24.470 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59432) - No such process 00:07:24.470 12:24:36 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59432 is not found' 00:07:24.470 Process with pid 59432 is not found 00:07:24.470 12:24:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:24.470 00:07:24.470 real 0m50.984s 00:07:24.470 user 1m25.038s 00:07:24.470 sys 0m6.771s 00:07:24.470 12:24:36 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.470 12:24:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.470 
************************************ 00:07:24.470 END TEST cpu_locks 00:07:24.470 ************************************ 00:07:24.470 ************************************ 00:07:24.470 END TEST event 00:07:24.470 ************************************ 00:07:24.470 00:07:24.470 real 1m23.609s 00:07:24.470 user 2m29.418s 00:07:24.470 sys 0m10.804s 00:07:24.470 12:24:36 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.470 12:24:36 event -- common/autotest_common.sh@10 -- # set +x 00:07:24.470 12:24:36 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:24.470 12:24:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.470 12:24:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.470 12:24:36 -- common/autotest_common.sh@10 -- # set +x 00:07:24.470 ************************************ 00:07:24.470 START TEST thread 00:07:24.470 ************************************ 00:07:24.470 12:24:36 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:24.470 * Looking for test storage... 
00:07:24.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:24.470 12:24:36 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:24.470 12:24:36 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:24.470 12:24:36 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:24.730 12:24:36 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:24.730 12:24:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.730 12:24:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.730 12:24:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.730 12:24:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.730 12:24:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.730 12:24:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.730 12:24:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.730 12:24:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.730 12:24:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.730 12:24:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.730 12:24:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.730 12:24:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:24.730 12:24:36 thread -- scripts/common.sh@345 -- # : 1 00:07:24.730 12:24:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.730 12:24:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.730 12:24:36 thread -- scripts/common.sh@365 -- # decimal 1 00:07:24.730 12:24:36 thread -- scripts/common.sh@353 -- # local d=1 00:07:24.730 12:24:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.730 12:24:36 thread -- scripts/common.sh@355 -- # echo 1 00:07:24.730 12:24:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.730 12:24:36 thread -- scripts/common.sh@366 -- # decimal 2 00:07:24.730 12:24:36 thread -- scripts/common.sh@353 -- # local d=2 00:07:24.730 12:24:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.730 12:24:36 thread -- scripts/common.sh@355 -- # echo 2 00:07:24.730 12:24:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.730 12:24:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.730 12:24:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.730 12:24:36 thread -- scripts/common.sh@368 -- # return 0 00:07:24.730 12:24:36 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.730 12:24:36 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:24.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.730 --rc genhtml_branch_coverage=1 00:07:24.730 --rc genhtml_function_coverage=1 00:07:24.730 --rc genhtml_legend=1 00:07:24.730 --rc geninfo_all_blocks=1 00:07:24.730 --rc geninfo_unexecuted_blocks=1 00:07:24.730 00:07:24.730 ' 00:07:24.730 12:24:36 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:24.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.730 --rc genhtml_branch_coverage=1 00:07:24.730 --rc genhtml_function_coverage=1 00:07:24.730 --rc genhtml_legend=1 00:07:24.730 --rc geninfo_all_blocks=1 00:07:24.730 --rc geninfo_unexecuted_blocks=1 00:07:24.730 00:07:24.730 ' 00:07:24.730 12:24:36 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:24.730 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.730 --rc genhtml_branch_coverage=1 00:07:24.730 --rc genhtml_function_coverage=1 00:07:24.730 --rc genhtml_legend=1 00:07:24.730 --rc geninfo_all_blocks=1 00:07:24.730 --rc geninfo_unexecuted_blocks=1 00:07:24.730 00:07:24.730 ' 00:07:24.730 12:24:36 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:24.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.730 --rc genhtml_branch_coverage=1 00:07:24.730 --rc genhtml_function_coverage=1 00:07:24.730 --rc genhtml_legend=1 00:07:24.730 --rc geninfo_all_blocks=1 00:07:24.730 --rc geninfo_unexecuted_blocks=1 00:07:24.730 00:07:24.730 ' 00:07:24.730 12:24:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:24.730 12:24:36 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:24.730 12:24:36 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.730 12:24:36 thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.730 ************************************ 00:07:24.730 START TEST thread_poller_perf 00:07:24.730 ************************************ 00:07:24.730 12:24:36 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:24.730 [2024-09-30 12:24:36.468562] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:24.730 [2024-09-30 12:24:36.468776] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59633 ] 00:07:24.990 [2024-09-30 12:24:36.638300] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.990 [2024-09-30 12:24:36.842666] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.990 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:26.372 ====================================== 00:07:26.372 busy:2299410014 (cyc) 00:07:26.372 total_run_count: 421000 00:07:26.372 tsc_hz: 2290000000 (cyc) 00:07:26.372 ====================================== 00:07:26.372 poller_cost: 5461 (cyc), 2384 (nsec) 00:07:26.372 00:07:26.372 real 0m1.803s 00:07:26.372 user 0m1.572s 00:07:26.372 sys 0m0.121s 00:07:26.372 12:24:38 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.372 12:24:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:26.372 ************************************ 00:07:26.372 END TEST thread_poller_perf 00:07:26.372 ************************************ 00:07:26.632 12:24:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:26.632 12:24:38 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:26.632 12:24:38 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.632 12:24:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.632 ************************************ 00:07:26.632 START TEST thread_poller_perf 00:07:26.632 ************************************ 00:07:26.632 12:24:38 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 
1000 -l 0 -t 1 00:07:26.632 [2024-09-30 12:24:38.336490] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:26.632 [2024-09-30 12:24:38.336645] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59664 ] 00:07:26.632 [2024-09-30 12:24:38.498133] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.892 [2024-09-30 12:24:38.698621] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.892 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:28.273 ====================================== 00:07:28.273 busy:2293906710 (cyc) 00:07:28.273 total_run_count: 5512000 00:07:28.273 tsc_hz: 2290000000 (cyc) 00:07:28.273 ====================================== 00:07:28.273 poller_cost: 416 (cyc), 181 (nsec) 00:07:28.273 00:07:28.273 real 0m1.791s 00:07:28.273 user 0m1.574s 00:07:28.273 sys 0m0.108s 00:07:28.273 12:24:40 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.273 12:24:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:28.273 ************************************ 00:07:28.273 END TEST thread_poller_perf 00:07:28.273 ************************************ 00:07:28.273 12:24:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:28.273 ************************************ 00:07:28.273 END TEST thread 00:07:28.273 ************************************ 00:07:28.273 00:07:28.273 real 0m3.953s 00:07:28.273 user 0m3.306s 00:07:28.273 sys 0m0.445s 00:07:28.273 12:24:40 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.273 12:24:40 thread -- common/autotest_common.sh@10 -- # set +x 00:07:28.532 12:24:40 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:28.532 12:24:40 -- spdk/autotest.sh@176 -- # run_test 
app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:28.532 12:24:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.532 12:24:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.532 12:24:40 -- common/autotest_common.sh@10 -- # set +x 00:07:28.532 ************************************ 00:07:28.532 START TEST app_cmdline 00:07:28.532 ************************************ 00:07:28.532 12:24:40 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:28.532 * Looking for test storage... 00:07:28.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:28.533 12:24:40 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:28.533 12:24:40 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:28.533 12:24:40 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:28.533 12:24:40 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:28.533 12:24:40 app_cmdline -- 
scripts/common.sh@345 -- # : 1 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.533 12:24:40 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:28.533 12:24:40 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.533 12:24:40 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:28.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.533 --rc genhtml_branch_coverage=1 00:07:28.533 --rc genhtml_function_coverage=1 00:07:28.533 --rc genhtml_legend=1 00:07:28.533 --rc geninfo_all_blocks=1 00:07:28.533 --rc geninfo_unexecuted_blocks=1 00:07:28.533 00:07:28.533 ' 00:07:28.533 12:24:40 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:28.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.533 --rc genhtml_branch_coverage=1 00:07:28.533 --rc 
genhtml_function_coverage=1 00:07:28.533 --rc genhtml_legend=1 00:07:28.533 --rc geninfo_all_blocks=1 00:07:28.533 --rc geninfo_unexecuted_blocks=1 00:07:28.533 00:07:28.533 ' 00:07:28.533 12:24:40 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:28.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.533 --rc genhtml_branch_coverage=1 00:07:28.533 --rc genhtml_function_coverage=1 00:07:28.533 --rc genhtml_legend=1 00:07:28.533 --rc geninfo_all_blocks=1 00:07:28.533 --rc geninfo_unexecuted_blocks=1 00:07:28.533 00:07:28.533 ' 00:07:28.533 12:24:40 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:28.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.533 --rc genhtml_branch_coverage=1 00:07:28.533 --rc genhtml_function_coverage=1 00:07:28.533 --rc genhtml_legend=1 00:07:28.533 --rc geninfo_all_blocks=1 00:07:28.533 --rc geninfo_unexecuted_blocks=1 00:07:28.533 00:07:28.533 ' 00:07:28.533 12:24:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:28.533 12:24:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59753 00:07:28.533 12:24:40 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:28.533 12:24:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59753 00:07:28.533 12:24:40 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59753 ']' 00:07:28.533 12:24:40 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.533 12:24:40 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.533 12:24:40 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:28.533 12:24:40 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.533 12:24:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.792 [2024-09-30 12:24:40.523351] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:28.792 [2024-09-30 12:24:40.523574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59753 ] 00:07:28.792 [2024-09-30 12:24:40.687022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.052 [2024-09-30 12:24:40.893860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.990 12:24:41 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.990 12:24:41 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:29.990 12:24:41 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:30.249 { 00:07:30.249 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:07:30.249 "fields": { 00:07:30.249 "major": 25, 00:07:30.249 "minor": 1, 00:07:30.249 "patch": 0, 00:07:30.249 "suffix": "-pre", 00:07:30.249 "commit": "09cc66129" 00:07:30.249 } 00:07:30.249 } 00:07:30.249 12:24:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:30.249 12:24:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:30.249 12:24:41 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:30.249 12:24:41 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:30.249 12:24:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:30.249 12:24:41 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.249 12:24:41 app_cmdline -- common/autotest_common.sh@10 
-- # set +x 00:07:30.249 12:24:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:30.249 12:24:41 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:30.249 12:24:41 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.249 12:24:41 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:30.249 12:24:41 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:30.249 12:24:41 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.249 12:24:41 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:30.249 12:24:41 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.249 12:24:41 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:30.249 12:24:41 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.249 12:24:41 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:30.249 12:24:41 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.249 12:24:41 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:30.249 12:24:41 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.249 12:24:41 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:30.249 12:24:41 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:30.249 12:24:41 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.509 request: 00:07:30.509 { 00:07:30.509 "method": "env_dpdk_get_mem_stats", 00:07:30.509 
"req_id": 1 00:07:30.509 } 00:07:30.509 Got JSON-RPC error response 00:07:30.509 response: 00:07:30.509 { 00:07:30.509 "code": -32601, 00:07:30.509 "message": "Method not found" 00:07:30.509 } 00:07:30.509 12:24:42 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:30.509 12:24:42 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:30.509 12:24:42 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:30.509 12:24:42 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:30.509 12:24:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59753 00:07:30.509 12:24:42 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59753 ']' 00:07:30.509 12:24:42 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59753 00:07:30.509 12:24:42 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:30.509 12:24:42 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.509 12:24:42 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59753 00:07:30.509 killing process with pid 59753 00:07:30.509 12:24:42 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.509 12:24:42 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.509 12:24:42 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59753' 00:07:30.509 12:24:42 app_cmdline -- common/autotest_common.sh@969 -- # kill 59753 00:07:30.509 12:24:42 app_cmdline -- common/autotest_common.sh@974 -- # wait 59753 00:07:33.048 ************************************ 00:07:33.048 END TEST app_cmdline 00:07:33.048 ************************************ 00:07:33.048 00:07:33.048 real 0m4.510s 00:07:33.048 user 0m4.669s 00:07:33.048 sys 0m0.619s 00:07:33.048 12:24:44 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.048 12:24:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:33.048 12:24:44 -- 
spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:33.048 12:24:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:33.048 12:24:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.048 12:24:44 -- common/autotest_common.sh@10 -- # set +x 00:07:33.048 ************************************ 00:07:33.048 START TEST version 00:07:33.048 ************************************ 00:07:33.048 12:24:44 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:33.048 * Looking for test storage... 00:07:33.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:33.048 12:24:44 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:33.048 12:24:44 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:33.048 12:24:44 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:33.308 12:24:44 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:33.308 12:24:44 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.308 12:24:44 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.308 12:24:44 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.308 12:24:44 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.308 12:24:44 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.308 12:24:44 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.308 12:24:44 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.308 12:24:44 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.308 12:24:44 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.308 12:24:44 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.308 12:24:44 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.308 12:24:44 version -- scripts/common.sh@344 -- # case "$op" in 00:07:33.308 12:24:44 version -- scripts/common.sh@345 -- # : 1 00:07:33.308 12:24:44 version -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.308 12:24:44 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:33.308 12:24:44 version -- scripts/common.sh@365 -- # decimal 1 00:07:33.308 12:24:44 version -- scripts/common.sh@353 -- # local d=1 00:07:33.308 12:24:44 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.308 12:24:44 version -- scripts/common.sh@355 -- # echo 1 00:07:33.308 12:24:44 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.308 12:24:44 version -- scripts/common.sh@366 -- # decimal 2 00:07:33.308 12:24:44 version -- scripts/common.sh@353 -- # local d=2 00:07:33.308 12:24:44 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.308 12:24:44 version -- scripts/common.sh@355 -- # echo 2 00:07:33.308 12:24:44 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.308 12:24:44 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.308 12:24:44 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.308 12:24:44 version -- scripts/common.sh@368 -- # return 0 00:07:33.308 12:24:44 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.308 12:24:44 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:33.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.308 --rc genhtml_branch_coverage=1 00:07:33.308 --rc genhtml_function_coverage=1 00:07:33.308 --rc genhtml_legend=1 00:07:33.308 --rc geninfo_all_blocks=1 00:07:33.308 --rc geninfo_unexecuted_blocks=1 00:07:33.308 00:07:33.308 ' 00:07:33.308 12:24:44 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:33.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.308 --rc genhtml_branch_coverage=1 00:07:33.308 --rc genhtml_function_coverage=1 00:07:33.308 --rc genhtml_legend=1 00:07:33.308 --rc geninfo_all_blocks=1 00:07:33.308 --rc geninfo_unexecuted_blocks=1 
00:07:33.308 00:07:33.308 ' 00:07:33.308 12:24:45 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:33.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.308 --rc genhtml_branch_coverage=1 00:07:33.308 --rc genhtml_function_coverage=1 00:07:33.308 --rc genhtml_legend=1 00:07:33.308 --rc geninfo_all_blocks=1 00:07:33.308 --rc geninfo_unexecuted_blocks=1 00:07:33.308 00:07:33.308 ' 00:07:33.308 12:24:45 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:33.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.308 --rc genhtml_branch_coverage=1 00:07:33.308 --rc genhtml_function_coverage=1 00:07:33.309 --rc genhtml_legend=1 00:07:33.309 --rc geninfo_all_blocks=1 00:07:33.309 --rc geninfo_unexecuted_blocks=1 00:07:33.309 00:07:33.309 ' 00:07:33.309 12:24:45 version -- app/version.sh@17 -- # get_header_version major 00:07:33.309 12:24:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:33.309 12:24:45 version -- app/version.sh@14 -- # cut -f2 00:07:33.309 12:24:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:33.309 12:24:45 version -- app/version.sh@17 -- # major=25 00:07:33.309 12:24:45 version -- app/version.sh@18 -- # get_header_version minor 00:07:33.309 12:24:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:33.309 12:24:45 version -- app/version.sh@14 -- # cut -f2 00:07:33.309 12:24:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:33.309 12:24:45 version -- app/version.sh@18 -- # minor=1 00:07:33.309 12:24:45 version -- app/version.sh@19 -- # get_header_version patch 00:07:33.309 12:24:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:33.309 12:24:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:33.309 
12:24:45 version -- app/version.sh@14 -- # cut -f2 00:07:33.309 12:24:45 version -- app/version.sh@19 -- # patch=0 00:07:33.309 12:24:45 version -- app/version.sh@20 -- # get_header_version suffix 00:07:33.309 12:24:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:33.309 12:24:45 version -- app/version.sh@14 -- # cut -f2 00:07:33.309 12:24:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:33.309 12:24:45 version -- app/version.sh@20 -- # suffix=-pre 00:07:33.309 12:24:45 version -- app/version.sh@22 -- # version=25.1 00:07:33.309 12:24:45 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:33.309 12:24:45 version -- app/version.sh@28 -- # version=25.1rc0 00:07:33.309 12:24:45 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:33.309 12:24:45 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:33.309 12:24:45 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:33.309 12:24:45 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:33.309 00:07:33.309 real 0m0.321s 00:07:33.309 user 0m0.188s 00:07:33.309 sys 0m0.181s 00:07:33.309 ************************************ 00:07:33.309 END TEST version 00:07:33.309 ************************************ 00:07:33.309 12:24:45 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.309 12:24:45 version -- common/autotest_common.sh@10 -- # set +x 00:07:33.309 12:24:45 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:33.309 12:24:45 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:33.309 12:24:45 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:33.309 12:24:45 -- common/autotest_common.sh@1101 -- 
# '[' 2 -le 1 ']' 00:07:33.309 12:24:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.309 12:24:45 -- common/autotest_common.sh@10 -- # set +x 00:07:33.309 ************************************ 00:07:33.309 START TEST bdev_raid 00:07:33.309 ************************************ 00:07:33.309 12:24:45 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:33.569 * Looking for test storage... 00:07:33.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:33.569 12:24:45 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:33.569 12:24:45 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:07:33.569 12:24:45 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:33.569 12:24:45 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@364 -- # (( v 
< (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.569 12:24:45 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:33.569 12:24:45 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.569 12:24:45 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:33.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.569 --rc genhtml_branch_coverage=1 00:07:33.569 --rc genhtml_function_coverage=1 00:07:33.569 --rc genhtml_legend=1 00:07:33.569 --rc geninfo_all_blocks=1 00:07:33.569 --rc geninfo_unexecuted_blocks=1 00:07:33.569 00:07:33.569 ' 00:07:33.569 12:24:45 bdev_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:33.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.569 --rc genhtml_branch_coverage=1 00:07:33.569 --rc genhtml_function_coverage=1 00:07:33.569 --rc genhtml_legend=1 00:07:33.569 --rc geninfo_all_blocks=1 00:07:33.569 --rc geninfo_unexecuted_blocks=1 00:07:33.569 00:07:33.569 ' 00:07:33.569 12:24:45 bdev_raid -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:33.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.569 --rc genhtml_branch_coverage=1 00:07:33.569 --rc genhtml_function_coverage=1 00:07:33.569 --rc genhtml_legend=1 00:07:33.569 --rc geninfo_all_blocks=1 00:07:33.569 --rc geninfo_unexecuted_blocks=1 00:07:33.569 00:07:33.569 ' 00:07:33.569 12:24:45 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:33.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.569 --rc genhtml_branch_coverage=1 00:07:33.569 --rc genhtml_function_coverage=1 00:07:33.569 --rc genhtml_legend=1 00:07:33.569 --rc geninfo_all_blocks=1 00:07:33.569 --rc geninfo_unexecuted_blocks=1 00:07:33.569 00:07:33.569 ' 00:07:33.569 12:24:45 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:33.569 12:24:45 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:33.569 12:24:45 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:33.569 12:24:45 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:33.569 12:24:45 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:33.569 12:24:45 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:33.569 12:24:45 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:33.569 12:24:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:33.569 12:24:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.569 12:24:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:33.569 ************************************ 00:07:33.569 START TEST raid1_resize_data_offset_test 00:07:33.569 ************************************ 00:07:33.569 12:24:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:07:33.569 12:24:45 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@917 -- # raid_pid=59946 00:07:33.569 12:24:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59946' 00:07:33.569 12:24:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:33.569 Process raid pid: 59946 00:07:33.569 12:24:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59946 00:07:33.569 12:24:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 59946 ']' 00:07:33.569 12:24:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.569 12:24:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.569 12:24:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.569 12:24:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.569 12:24:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.828 [2024-09-30 12:24:45.496574] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:33.828 [2024-09-30 12:24:45.496813] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.828 [2024-09-30 12:24:45.664202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.088 [2024-09-30 12:24:45.867648] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.347 [2024-09-30 12:24:46.062880] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.347 [2024-09-30 12:24:46.063002] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.606 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.606 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:07:34.606 12:24:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:34.606 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.606 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.606 malloc0 00:07:34.606 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.606 12:24:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:34.606 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.606 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.606 malloc1 00:07:34.606 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.606 12:24:46 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:34.606 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.606 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.606 null0 00:07:34.606 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.606 12:24:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:34.606 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.606 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.606 [2024-09-30 12:24:46.490448] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:34.606 [2024-09-30 12:24:46.492258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:34.606 [2024-09-30 12:24:46.492391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:34.606 [2024-09-30 12:24:46.492569] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:34.606 [2024-09-30 12:24:46.492584] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:34.606 [2024-09-30 12:24:46.492878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:34.606 [2024-09-30 12:24:46.493060] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:34.606 [2024-09-30 12:24:46.493074] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:34.606 [2024-09-30 12:24:46.493235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:34.606 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.607 12:24:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.607 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.607 12:24:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:34.607 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.866 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.866 12:24:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:34.866 12:24:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:34.866 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.866 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.866 [2024-09-30 12:24:46.554302] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:34.866 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.866 12:24:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:34.866 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.866 12:24:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.435 malloc2 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.435 [2024-09-30 12:24:47.087733] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:35.435 [2024-09-30 12:24:47.104487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.435 [2024-09-30 12:24:47.106366] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59946 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 59946 ']' 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 59946 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59946 00:07:35.435 killing process with pid 59946 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59946' 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 59946 00:07:35.435 12:24:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 59946 00:07:35.435 [2024-09-30 12:24:47.194331] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.435 [2024-09-30 12:24:47.195408] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:35.435 [2024-09-30 12:24:47.195568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.435 [2024-09-30 12:24:47.195598] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:35.435 [2024-09-30 12:24:47.222287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.435 [2024-09-30 12:24:47.222685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.435 [2024-09-30 12:24:47.222715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:37.355 [2024-09-30 12:24:48.922500] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:38.295 12:24:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:38.295 00:07:38.295 real 0m4.722s 00:07:38.295 user 0m4.611s 00:07:38.295 sys 0m0.540s 00:07:38.295 
************************************ 00:07:38.295 END TEST raid1_resize_data_offset_test 00:07:38.295 ************************************ 00:07:38.295 12:24:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.295 12:24:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.295 12:24:50 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:38.296 12:24:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:38.296 12:24:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.296 12:24:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.555 ************************************ 00:07:38.555 START TEST raid0_resize_superblock_test 00:07:38.555 ************************************ 00:07:38.555 12:24:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:07:38.555 12:24:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:38.555 Process raid pid: 60031 00:07:38.555 12:24:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60031 00:07:38.555 12:24:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:38.555 12:24:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60031' 00:07:38.555 12:24:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60031 00:07:38.555 12:24:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60031 ']' 00:07:38.555 12:24:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.555 12:24:50 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.555 12:24:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.555 12:24:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.555 12:24:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.555 [2024-09-30 12:24:50.286960] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:38.555 [2024-09-30 12:24:50.287092] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.815 [2024-09-30 12:24:50.451892] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.815 [2024-09-30 12:24:50.650078] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.074 [2024-09-30 12:24:50.841214] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.074 [2024-09-30 12:24:50.841325] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.334 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.334 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:39.334 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:39.334 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.334 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:39.904 malloc0 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.904 [2024-09-30 12:24:51.620526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:39.904 [2024-09-30 12:24:51.620656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.904 [2024-09-30 12:24:51.620685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:39.904 [2024-09-30 12:24:51.620699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.904 [2024-09-30 12:24:51.622884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.904 [2024-09-30 12:24:51.622932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:39.904 pt0 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.904 0f302907-2e8f-4eea-a1a5-a53503a9ed4d 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.904 c82c21eb-e91d-45d3-b60e-3d7e2476c730 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.904 4ad51ce5-9e56-4466-a020-1cbf3e6cc60a 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.904 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.904 [2024-09-30 12:24:51.756014] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev c82c21eb-e91d-45d3-b60e-3d7e2476c730 is claimed 00:07:39.904 [2024-09-30 12:24:51.756179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4ad51ce5-9e56-4466-a020-1cbf3e6cc60a is claimed 00:07:39.904 [2024-09-30 12:24:51.756331] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:39.904 [2024-09-30 12:24:51.756348] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:39.905 [2024-09-30 12:24:51.756593] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:39.905 [2024-09-30 12:24:51.756802] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:39.905 [2024-09-30 12:24:51.756815] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:39.905 [2024-09-30 12:24:51.756984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.905 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.905 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:39.905 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:39.905 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.905 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.905 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:40.165 12:24:51 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:40.165 [2024-09-30 12:24:51.864091] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.165 [2024-09-30 12:24:51.916034] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:40.165 [2024-09-30 12:24:51.916062] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c82c21eb-e91d-45d3-b60e-3d7e2476c730' was resized: old size 131072, new size 204800 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.165 [2024-09-30 12:24:51.928006] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:40.165 [2024-09-30 12:24:51.928032] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4ad51ce5-9e56-4466-a020-1cbf3e6cc60a' was resized: old size 131072, new size 204800 00:07:40.165 [2024-09-30 12:24:51.928065] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:40.165 12:24:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.165 12:24:51 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.165 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.165 12:24:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:40.165 12:24:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:40.165 12:24:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:40.165 12:24:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:40.166 12:24:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:40.166 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.166 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.166 [2024-09-30 12:24:52.035894] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.166 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.425 [2024-09-30 12:24:52.071642] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:07:40.425 [2024-09-30 12:24:52.071774] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:40.425 [2024-09-30 12:24:52.071810] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:40.425 [2024-09-30 12:24:52.071859] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:40.425 [2024-09-30 12:24:52.071970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:40.425 [2024-09-30 12:24:52.072038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:40.425 [2024-09-30 12:24:52.072088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.425 [2024-09-30 12:24:52.083589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:40.425 [2024-09-30 12:24:52.083711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.425 [2024-09-30 12:24:52.083763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:40.425 [2024-09-30 12:24:52.083814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.425 [2024-09-30 12:24:52.085896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.425 [2024-09-30 12:24:52.085988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:40.425 [2024-09-30 12:24:52.087679] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c82c21eb-e91d-45d3-b60e-3d7e2476c730 00:07:40.425 [2024-09-30 12:24:52.087849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev c82c21eb-e91d-45d3-b60e-3d7e2476c730 is claimed 00:07:40.425 [2024-09-30 12:24:52.088026] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4ad51ce5-9e56-4466-a020-1cbf3e6cc60a 00:07:40.425 [2024-09-30 12:24:52.088098] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4ad51ce5-9e56-4466-a020-1cbf3e6cc60a is claimed 00:07:40.425 [2024-09-30 12:24:52.088306] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 4ad51ce5-9e56-4466-a020-1cbf3e6cc60a (2) smaller than existing raid bdev Raid (3) 00:07:40.425 [2024-09-30 12:24:52.088381] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev c82c21eb-e91d-45d3-b60e-3d7e2476c730: File exists 00:07:40.425 [2024-09-30 12:24:52.088470] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:40.425 [2024-09-30 12:24:52.088509] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:40.425 pt0 00:07:40.425 [2024-09-30 12:24:52.088793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:40.425 [2024-09-30 12:24:52.088970] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:40.425 [2024-09-30 12:24:52.089029] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:40.425 [2024-09-30 12:24:52.089199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.425 [2024-09-30 12:24:52.111958] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60031 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60031 ']' 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60031 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60031 00:07:40.425 killing process with pid 60031 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60031' 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60031 00:07:40.425 [2024-09-30 12:24:52.176993] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:40.425 [2024-09-30 12:24:52.177049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:40.425 [2024-09-30 12:24:52.177085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:40.425 [2024-09-30 12:24:52.177093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:40.425 12:24:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60031 00:07:41.803 [2024-09-30 12:24:53.506357] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:43.180 12:24:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:43.180 00:07:43.180 real 0m4.514s 00:07:43.180 user 0m4.678s 00:07:43.180 sys 0m0.553s 00:07:43.180 12:24:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.180 ************************************ 00:07:43.180 END TEST raid0_resize_superblock_test 00:07:43.180 
************************************ 00:07:43.180 12:24:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.180 12:24:54 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:43.180 12:24:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:43.180 12:24:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.180 12:24:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:43.180 ************************************ 00:07:43.180 START TEST raid1_resize_superblock_test 00:07:43.180 ************************************ 00:07:43.180 12:24:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:07:43.180 12:24:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:43.180 Process raid pid: 60128 00:07:43.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:43.180 12:24:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60128 00:07:43.180 12:24:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60128' 00:07:43.180 12:24:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:43.180 12:24:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60128 00:07:43.180 12:24:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60128 ']' 00:07:43.180 12:24:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.180 12:24:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.180 12:24:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.180 12:24:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.180 12:24:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.180 [2024-09-30 12:24:54.876302] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:43.180 [2024-09-30 12:24:54.876502] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.180 [2024-09-30 12:24:55.038020] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.440 [2024-09-30 12:24:55.231623] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.699 [2024-09-30 12:24:55.423102] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.699 [2024-09-30 12:24:55.423247] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.958 12:24:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.958 12:24:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:43.958 12:24:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:43.958 12:24:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.958 12:24:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.528 malloc0 00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.528 [2024-09-30 12:24:56.199046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:44.528 [2024-09-30 12:24:56.199184] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:44.528 [2024-09-30 12:24:56.199231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:44.528 [2024-09-30 12:24:56.199271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:44.528 [2024-09-30 12:24:56.201396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:44.528 [2024-09-30 12:24:56.201491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:07:44.528 pt0
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.528 7c1248d6-b41c-41b5-92ca-0d5d41973bfb
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.528 f6999ec8-c662-42da-946b-025317554474
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.528 12:24:56
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.528 c7c41fa3-7fca-42e6-8c05-95277ec20326
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.528 [2024-09-30 12:24:56.331663] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev f6999ec8-c662-42da-946b-025317554474 is claimed
00:07:44.528 [2024-09-30 12:24:56.331770] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev c7c41fa3-7fca-42e6-8c05-95277ec20326 is claimed
00:07:44.528 [2024-09-30 12:24:56.331943] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:44.528 [2024-09-30 12:24:56.331962] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:07:44.528 [2024-09-30 12:24:56.332199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:44.528 [2024-09-30 12:24:56.332417] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:44.528 [2024-09-30 12:24:56.332429] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:07:44.528 [2024-09-30 12:24:56.332582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test --
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.528 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks'
00:07:44.788 [2024-09-30
12:24:56.447665] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 ))
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.788 [2024-09-30 12:24:56.495525] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:44.788 [2024-09-30 12:24:56.495553] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f6999ec8-c662-42da-946b-025317554474' was resized: old size 131072, new size 204800
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.788 [2024-09-30 12:24:56.507478] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:44.788 [2024-09-30 12:24:56.507512] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c7c41fa3-7fca-42e6-8c05-95277ec20326' was resized: old size 131072, new size 204800
00:07:44.788
[2024-09-30 12:24:56.507542] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.788 12:24:56
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks'
[2024-09-30 12:24:56.619360] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 ))
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.788 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.788 [2024-09-30 12:24:56.667080] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
[2024-09-30 12:24:56.667149] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
[2024-09-30 12:24:56.667185] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
[2024-09-30 12:24:56.667326] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-09-30 12:24:56.667492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-09-30 12:24:56.667567] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:44.789
[2024-09-30 12:24:56.667585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:07:44.789 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:44.789 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:44.789 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.789 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.789 [2024-09-30 12:24:56.675038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-09-30 12:24:56.675100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-09-30 12:24:56.675121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
[2024-09-30 12:24:56.675134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-09-30 12:24:56.677311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-09-30 12:24:56.677372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
[2024-09-30 12:24:56.679043] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f6999ec8-c662-42da-946b-025317554474
[2024-09-30 12:24:56.679110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev f6999ec8-c662-42da-946b-025317554474 is claimed
[2024-09-30 12:24:56.679233] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c7c41fa3-7fca-42e6-8c05-95277ec20326
[2024-09-30 12:24:56.679254] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev c7c41fa3-7fca-42e6-8c05-95277ec20326 is claimed
[2024-09-30 12:24:56.679418]
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev c7c41fa3-7fca-42e6-8c05-95277ec20326 (2) smaller than existing raid bdev Raid (3)
[2024-09-30 12:24:56.679442] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev f6999ec8-c662-42da-946b-025317554474: File exists
[2024-09-30 12:24:56.679482] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
[2024-09-30 12:24:56.679494] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
pt0
[2024-09-30 12:24:56.679779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:07:44.789 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
[2024-09-30 12:24:56.679964] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
[2024-09-30 12:24:56.679974] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
[2024-09-30 12:24:56.680144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test --
common/autotest_common.sh@561 -- # xtrace_disable
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks'
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.049 [2024-09-30 12:24:56.695325] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 ))
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60128
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60128 ']'
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60128
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60128
killing process with pid 60128
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test --
common/autotest_common.sh@968 -- # echo 'killing process with pid 60128'
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60128
00:07:45.049 [2024-09-30 12:24:56.772074] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-09-30 12:24:56.772135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-09-30 12:24:56.772178] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-09-30 12:24:56.772187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:07:45.049 12:24:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60128
00:07:46.431 [2024-09-30 12:24:58.116554] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:47.814 12:24:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:07:47.814
00:07:47.814 real 0m4.517s
00:07:47.814 user 0m4.679s
00:07:47.814 sys 0m0.579s
00:07:47.814 12:24:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:47.814 ************************************
00:07:47.814 END TEST raid1_resize_superblock_test
00:07:47.814 ************************************
00:07:47.814 12:24:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:47.814 12:24:59 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s
00:07:47.814 12:24:59 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']'
00:07:47.814 12:24:59 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd
00:07:47.814 12:24:59 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true
00:07:47.814 12:24:59 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd
00:07:47.814 12:24:59 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0
12:24:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:47.814 12:24:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:47.814 12:24:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:47.814 ************************************
00:07:47.814 START TEST raid_function_test_raid0
00:07:47.814 ************************************
00:07:47.814 12:24:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0
00:07:47.814 12:24:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0
00:07:47.814 12:24:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:07:47.814 12:24:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev
00:07:47.814 12:24:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60231
00:07:47.814 12:24:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:47.815 12:24:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60231'
Process raid pid: 60231
00:07:47.815 12:24:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60231
00:07:47.815 12:24:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 60231 ']'
00:07:47.815 12:24:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:47.815 12:24:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:47.815 12:24:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:47.815 12:24:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:47.815 12:24:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:47.815 [2024-09-30 12:24:59.483372] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:07:47.815 [2024-09-30 12:24:59.483561] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:47.815 [2024-09-30 12:24:59.648940] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:48.075 [2024-09-30 12:24:59.848261] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:48.334 [2024-09-30 12:25:00.052321] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:48.334 [2024-09-30 12:25:00.052433] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:48.593 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:48.593 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0
00:07:48.593 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1
00:07:48.593 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:48.593 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:48.593 Base_1
00:07:48.593 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:48.593 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:07:48.593 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:48.593
12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:48.593 Base_2
00:07:48.593 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:48.593 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid
00:07:48.593 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:48.593 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:48.593 [2024-09-30 12:25:00.444296] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:07:48.593 [2024-09-30 12:25:00.446083] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:07:48.593 [2024-09-30 12:25:00.446163] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:48.593 [2024-09-30 12:25:00.446176] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:48.593 [2024-09-30 12:25:00.446428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:48.593 [2024-09-30 12:25:00.446568] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:48.593 [2024-09-30 12:25:00.446578] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780
00:07:48.593 [2024-09-30 12:25:00.446753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:48.593 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:48.593 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:07:48.593 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:48.593 12:25:00 bdev_raid.raid_function_test_raid0
-- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:07:48.593 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:48.593 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:48.853 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:07:48.853 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:07:48.853 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:07:48.853 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:07:48.853 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:07:48.853 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
[2024-09-30 12:25:00.679974] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
/dev/nbd0
12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
1+0 records in
00:07:48.854 1+0 records out
00:07:48.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024932 s, 16.4 MB/s
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 --
bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:48.854 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:49.113 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:49.113 {
00:07:49.113 "nbd_device": "/dev/nbd0",
00:07:49.113 "bdev_name": "raid"
00:07:49.113 }
00:07:49.113 ]'
00:07:49.113 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[
00:07:49.113 {
00:07:49.113 "nbd_device": "/dev/nbd0",
00:07:49.113 "bdev_name": "raid"
00:07:49.113 }
00:07:49.113 ]'
00:07:49.113 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:49.113 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:07:49.113 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:07:49.113 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:49.113 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1
00:07:49.113 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1
00:07:49.113 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1
00:07:49.113 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:07:49.113 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:07:49.113 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:07:49.113 12:25:00 bdev_raid.raid_function_test_raid0 --
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:07:49.113 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize
00:07:49.113 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:07:49.113 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:07:49.113 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:07:49.113 12:25:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512
00:07:49.113 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:07:49.113 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:07:49.113 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:07:49.113 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:07:49.113 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:07:49.113 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:07:49.113 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:07:49.113 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:07:49.113 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
4096+0 records in
00:07:49.373 4096+0 records out
00:07:49.373 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0326955 s, 64.1 MB/s
00:07:49.373 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
4096+0 records in
00:07:49.373 4096+0 records out
00:07:49.373 2097152 bytes (2.1 MB, 2.0 MiB) copied,
0.217956 s, 9.6 MB/s 00:07:49.373 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:49.633 128+0 records in 00:07:49.633 128+0 records out 00:07:49.633 65536 bytes (66 kB, 64 KiB) copied, 0.00117847 s, 55.6 MB/s 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:49.633 2035+0 records in 00:07:49.633 2035+0 records out 00:07:49.633 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.0151712 s, 68.7 MB/s 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:49.633 456+0 records in 00:07:49.633 456+0 records out 00:07:49.633 233472 bytes (233 kB, 228 KiB) copied, 0.00294743 s, 79.2 MB/s 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.633 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:49.894 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:49.894 [2024-09-30 12:25:01.581372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.894 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:49.894 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:49.894 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.894 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.894 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:49.894 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:49.894 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.894 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:49.894 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:49.894 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60231 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 60231 ']' 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 60231 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60231 00:07:50.154 killing process with pid 60231 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60231' 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 60231 00:07:50.154 [2024-09-30 12:25:01.879051] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:50.154 [2024-09-30 12:25:01.879167] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.154 12:25:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 60231 00:07:50.154 [2024-09-30 12:25:01.879219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:50.154 [2024-09-30 12:25:01.879236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:50.414 [2024-09-30 12:25:02.085754] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:51.796 12:25:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:51.796 00:07:51.796 real 0m3.879s 00:07:51.796 user 0m4.388s 00:07:51.796 sys 0m0.983s 00:07:51.796 12:25:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.796 12:25:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:51.796 ************************************ 00:07:51.796 END TEST raid_function_test_raid0 00:07:51.796 ************************************ 00:07:51.796 12:25:03 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:51.796 12:25:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:51.796 12:25:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.796 12:25:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:51.796 
************************************ 00:07:51.796 START TEST raid_function_test_concat 00:07:51.796 ************************************ 00:07:51.796 12:25:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:07:51.796 12:25:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:51.796 12:25:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:51.796 12:25:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:51.796 12:25:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60360 00:07:51.796 Process raid pid: 60360 00:07:51.796 12:25:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:51.796 12:25:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60360' 00:07:51.796 12:25:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60360 00:07:51.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.796 12:25:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 60360 ']' 00:07:51.796 12:25:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.796 12:25:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.796 12:25:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:51.796 12:25:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.796 12:25:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:51.796 [2024-09-30 12:25:03.441982] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:51.796 [2024-09-30 12:25:03.442650] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.796 [2024-09-30 12:25:03.611359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.057 [2024-09-30 12:25:03.811101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.317 [2024-09-30 12:25:03.992861] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.317 [2024-09-30 12:25:03.992991] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.577 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.577 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:07:52.577 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:52.577 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.577 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:52.577 Base_1 00:07:52.577 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.577 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:52.577 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:52.577 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:52.577 Base_2 00:07:52.577 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.577 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:52.577 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.577 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:52.577 [2024-09-30 12:25:04.369288] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:52.577 [2024-09-30 12:25:04.371211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:52.577 [2024-09-30 12:25:04.371338] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:52.577 [2024-09-30 12:25:04.371382] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:52.577 [2024-09-30 12:25:04.371692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:52.577 [2024-09-30 12:25:04.371914] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:52.577 [2024-09-30 12:25:04.371963] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:52.577 [2024-09-30 12:25:04.372129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.577 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.577 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:52.577 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:52.577 12:25:04 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.577 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:52.577 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.577 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:52.578 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:52.578 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:52.578 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:52.578 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:52.578 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:52.578 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:52.578 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:52.578 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:52.578 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:52.578 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:52.578 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:52.847 [2024-09-30 12:25:04.600934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:52.847 /dev/nbd0 00:07:52.847 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:52.847 12:25:04 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:52.847 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:52.847 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:07:52.847 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:52.847 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:52.847 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:52.847 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:07:52.847 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:52.847 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:52.847 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.847 1+0 records in 00:07:52.847 1+0 records out 00:07:52.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344109 s, 11.9 MB/s 00:07:52.848 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.848 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:07:52.848 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.848 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:52.848 12:25:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:07:52.848 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.848 
12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:52.848 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:52.848 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:52.848 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:53.122 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:53.122 { 00:07:53.122 "nbd_device": "/dev/nbd0", 00:07:53.122 "bdev_name": "raid" 00:07:53.123 } 00:07:53.123 ]' 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:53.123 { 00:07:53.123 "nbd_device": "/dev/nbd0", 00:07:53.123 "bdev_name": "raid" 00:07:53.123 } 00:07:53.123 ]' 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:53.123 
12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:53.123 4096+0 records in 00:07:53.123 4096+0 records out 00:07:53.123 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.032571 s, 64.4 MB/s 00:07:53.123 12:25:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:53.383 4096+0 records in 00:07:53.383 4096+0 
records out 00:07:53.383 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.202029 s, 10.4 MB/s 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:53.383 128+0 records in 00:07:53.383 128+0 records out 00:07:53.383 65536 bytes (66 kB, 64 KiB) copied, 0.00124965 s, 52.4 MB/s 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:07:53.383 2035+0 records in 00:07:53.383 2035+0 records out 00:07:53.383 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0121919 s, 85.5 MB/s 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:53.383 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:53.643 456+0 records in 00:07:53.643 456+0 records out 00:07:53.643 233472 bytes (233 kB, 228 KiB) copied, 0.00366975 s, 63.6 MB/s 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.643 [2024-09-30 12:25:05.515715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:53.643 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:53.643 12:25:05 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:53.903 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:53.903 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:53.903 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:53.903 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:53.903 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:53.903 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:53.903 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:53.903 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:53.903 12:25:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:53.903 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:53.903 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:53.903 12:25:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60360 00:07:53.903 12:25:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 60360 ']' 00:07:53.903 12:25:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 60360 00:07:53.903 12:25:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:07:53.903 12:25:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.904 12:25:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60360 00:07:54.163 killing process with pid 60360 00:07:54.163 12:25:05 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:54.163 12:25:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:54.163 12:25:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60360' 00:07:54.163 12:25:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 60360 00:07:54.163 [2024-09-30 12:25:05.800217] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:54.163 [2024-09-30 12:25:05.800332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.163 12:25:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 60360 00:07:54.163 [2024-09-30 12:25:05.800384] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.163 [2024-09-30 12:25:05.800399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:54.163 [2024-09-30 12:25:05.990560] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.544 12:25:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:55.544 00:07:55.544 real 0m3.828s 00:07:55.544 user 0m4.313s 00:07:55.544 sys 0m0.971s 00:07:55.544 12:25:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.544 12:25:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:55.544 ************************************ 00:07:55.544 END TEST raid_function_test_concat 00:07:55.544 ************************************ 00:07:55.544 12:25:07 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:55.544 12:25:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:55.544 12:25:07 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.544 12:25:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:55.544 ************************************ 00:07:55.544 START TEST raid0_resize_test 00:07:55.544 ************************************ 00:07:55.544 12:25:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:07:55.544 12:25:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:55.544 12:25:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:55.544 12:25:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:55.544 12:25:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:55.544 12:25:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:55.544 12:25:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:55.544 12:25:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:55.544 12:25:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:55.544 12:25:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60483 00:07:55.544 12:25:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:55.544 12:25:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60483' 00:07:55.544 Process raid pid: 60483 00:07:55.544 12:25:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60483 00:07:55.544 12:25:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60483 ']' 00:07:55.544 12:25:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.544 12:25:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:07:55.544 12:25:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.544 12:25:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.544 12:25:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.545 [2024-09-30 12:25:07.337706] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:55.545 [2024-09-30 12:25:07.337898] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.806 [2024-09-30 12:25:07.499809] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.066 [2024-09-30 12:25:07.706815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.066 [2024-09-30 12:25:07.889537] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.066 [2024-09-30 12:25:07.889672] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.326 Base_1 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.326 
12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.326 Base_2 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.326 [2024-09-30 12:25:08.184013] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:56.326 [2024-09-30 12:25:08.185729] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:56.326 [2024-09-30 12:25:08.185805] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:56.326 [2024-09-30 12:25:08.185819] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:56.326 [2024-09-30 12:25:08.186070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:56.326 [2024-09-30 12:25:08.186206] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:56.326 [2024-09-30 12:25:08.186219] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:56.326 [2024-09-30 12:25:08.186360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.326 
12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.326 [2024-09-30 12:25:08.195940] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:56.326 [2024-09-30 12:25:08.195970] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:56.326 true 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.326 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.326 [2024-09-30 12:25:08.212054] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.586 [2024-09-30 12:25:08.255859] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:56.586 [2024-09-30 12:25:08.255940] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:56.586 [2024-09-30 12:25:08.256027] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:56.586 true 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.586 [2024-09-30 12:25:08.271999] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60483 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@950 -- # '[' -z 60483 ']' 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 60483 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60483 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60483' 00:07:56.586 killing process with pid 60483 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 60483 00:07:56.586 [2024-09-30 12:25:08.355257] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.586 [2024-09-30 12:25:08.355399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.586 12:25:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 60483 00:07:56.586 [2024-09-30 12:25:08.355474] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.586 [2024-09-30 12:25:08.355495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:56.586 [2024-09-30 12:25:08.372122] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.965 12:25:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:57.965 00:07:57.965 real 0m2.308s 00:07:57.965 user 0m2.399s 00:07:57.965 sys 0m0.364s 00:07:57.965 12:25:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.965 
************************************ 00:07:57.965 END TEST raid0_resize_test 00:07:57.965 ************************************ 00:07:57.965 12:25:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.965 12:25:09 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:57.965 12:25:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:57.965 12:25:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.965 12:25:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.965 ************************************ 00:07:57.965 START TEST raid1_resize_test 00:07:57.965 ************************************ 00:07:57.965 12:25:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:07:57.965 12:25:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:57.965 12:25:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:57.965 12:25:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:57.965 12:25:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:57.965 12:25:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:57.965 12:25:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:57.965 12:25:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:57.965 12:25:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:57.965 12:25:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60544 00:07:57.965 12:25:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:57.965 12:25:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60544' 
00:07:57.965 Process raid pid: 60544 00:07:57.965 12:25:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60544 00:07:57.965 12:25:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60544 ']' 00:07:57.965 12:25:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.965 12:25:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.965 12:25:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.965 12:25:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.965 12:25:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.965 [2024-09-30 12:25:09.717304] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:57.965 [2024-09-30 12:25:09.717529] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.225 [2024-09-30 12:25:09.881397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.225 [2024-09-30 12:25:10.070001] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.485 [2024-09-30 12:25:10.252883] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.485 [2024-09-30 12:25:10.253016] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.745 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.745 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:58.745 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:58.745 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.745 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.745 Base_1 00:07:58.745 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.745 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:58.745 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.745 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.745 Base_2 00:07:58.745 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.745 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:58.745 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:58.745 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.745 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.745 [2024-09-30 12:25:10.567871] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:58.745 [2024-09-30 12:25:10.569725] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:58.745 [2024-09-30 12:25:10.569864] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:58.745 [2024-09-30 12:25:10.569929] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:58.745 [2024-09-30 12:25:10.570182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:58.745 [2024-09-30 12:25:10.570374] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:58.745 [2024-09-30 12:25:10.570422] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:58.745 [2024-09-30 12:25:10.570606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.745 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.745 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:58.746 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.746 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.746 [2024-09-30 12:25:10.575790] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:58.746 [2024-09-30 12:25:10.575872] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:58.746 true 00:07:58.746 
12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.746 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:58.746 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.746 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.746 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:58.746 [2024-09-30 12:25:10.587899] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.746 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.746 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:58.746 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:58.746 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:58.746 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:58.746 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:58.746 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:58.746 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.746 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.746 [2024-09-30 12:25:10.635666] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:58.746 [2024-09-30 12:25:10.635782] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:58.746 [2024-09-30 12:25:10.635852] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:58.746 true 00:07:58.746 12:25:10 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:59.006 [2024-09-30 12:25:10.647801] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60544 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60544 ']' 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 60544 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60544 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60544' 00:07:59.006 killing process with pid 60544 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 60544 00:07:59.006 [2024-09-30 12:25:10.733936] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:59.006 [2024-09-30 12:25:10.734075] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.006 12:25:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 60544 00:07:59.006 [2024-09-30 12:25:10.734550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.006 [2024-09-30 12:25:10.734639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:59.006 [2024-09-30 12:25:10.750551] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.387 12:25:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:00.387 00:08:00.387 real 0m2.307s 00:08:00.387 user 0m2.404s 00:08:00.387 sys 0m0.346s 00:08:00.387 12:25:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.387 12:25:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.387 ************************************ 00:08:00.387 END TEST raid1_resize_test 00:08:00.387 ************************************ 00:08:00.387 12:25:11 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:00.387 12:25:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:00.387 12:25:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:00.387 12:25:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:00.387 12:25:12 bdev_raid 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.387 12:25:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.387 ************************************ 00:08:00.388 START TEST raid_state_function_test 00:08:00.388 ************************************ 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60601 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60601' 00:08:00.388 Process raid pid: 60601 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60601 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 60601 ']' 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:00.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.388 12:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.388 [2024-09-30 12:25:12.113089] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:00.388 [2024-09-30 12:25:12.113323] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.388 [2024-09-30 12:25:12.268984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.648 [2024-09-30 12:25:12.463263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.907 [2024-09-30 12:25:12.657099] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.907 [2024-09-30 12:25:12.657213] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.195 [2024-09-30 12:25:12.922981] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.195 [2024-09-30 12:25:12.923050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:08:01.195 [2024-09-30 12:25:12.923061] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.195 [2024-09-30 12:25:12.923073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.195 "name": "Existed_Raid", 00:08:01.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.195 "strip_size_kb": 64, 00:08:01.195 "state": "configuring", 00:08:01.195 "raid_level": "raid0", 00:08:01.195 "superblock": false, 00:08:01.195 "num_base_bdevs": 2, 00:08:01.195 "num_base_bdevs_discovered": 0, 00:08:01.195 "num_base_bdevs_operational": 2, 00:08:01.195 "base_bdevs_list": [ 00:08:01.195 { 00:08:01.195 "name": "BaseBdev1", 00:08:01.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.195 "is_configured": false, 00:08:01.195 "data_offset": 0, 00:08:01.195 "data_size": 0 00:08:01.195 }, 00:08:01.195 { 00:08:01.195 "name": "BaseBdev2", 00:08:01.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.195 "is_configured": false, 00:08:01.195 "data_offset": 0, 00:08:01.195 "data_size": 0 00:08:01.195 } 00:08:01.195 ] 00:08:01.195 }' 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.195 12:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.456 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:01.456 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.456 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.456 [2024-09-30 12:25:13.330209] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:01.456 [2024-09-30 12:25:13.330350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:01.456 12:25:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.456 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:01.456 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.456 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.456 [2024-09-30 12:25:13.342207] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.456 [2024-09-30 12:25:13.342322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.456 [2024-09-30 12:25:13.342355] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.456 [2024-09-30 12:25:13.342384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.456 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.456 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:01.456 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.456 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.716 [2024-09-30 12:25:13.420361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.716 BaseBdev1 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.716 [ 00:08:01.716 { 00:08:01.716 "name": "BaseBdev1", 00:08:01.716 "aliases": [ 00:08:01.716 "2b28d8c8-6763-4be9-b08a-83e49e8613b4" 00:08:01.716 ], 00:08:01.716 "product_name": "Malloc disk", 00:08:01.716 "block_size": 512, 00:08:01.716 "num_blocks": 65536, 00:08:01.716 "uuid": "2b28d8c8-6763-4be9-b08a-83e49e8613b4", 00:08:01.716 "assigned_rate_limits": { 00:08:01.716 "rw_ios_per_sec": 0, 00:08:01.716 "rw_mbytes_per_sec": 0, 00:08:01.716 "r_mbytes_per_sec": 0, 00:08:01.716 "w_mbytes_per_sec": 0 00:08:01.716 }, 00:08:01.716 "claimed": true, 00:08:01.716 "claim_type": "exclusive_write", 00:08:01.716 "zoned": false, 00:08:01.716 "supported_io_types": { 00:08:01.716 "read": true, 00:08:01.716 "write": true, 00:08:01.716 "unmap": true, 00:08:01.716 "flush": true, 00:08:01.716 "reset": true, 00:08:01.716 "nvme_admin": false, 00:08:01.716 "nvme_io": 
false, 00:08:01.716 "nvme_io_md": false, 00:08:01.716 "write_zeroes": true, 00:08:01.716 "zcopy": true, 00:08:01.716 "get_zone_info": false, 00:08:01.716 "zone_management": false, 00:08:01.716 "zone_append": false, 00:08:01.716 "compare": false, 00:08:01.716 "compare_and_write": false, 00:08:01.716 "abort": true, 00:08:01.716 "seek_hole": false, 00:08:01.716 "seek_data": false, 00:08:01.716 "copy": true, 00:08:01.716 "nvme_iov_md": false 00:08:01.716 }, 00:08:01.716 "memory_domains": [ 00:08:01.716 { 00:08:01.716 "dma_device_id": "system", 00:08:01.716 "dma_device_type": 1 00:08:01.716 }, 00:08:01.716 { 00:08:01.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.716 "dma_device_type": 2 00:08:01.716 } 00:08:01.716 ], 00:08:01.716 "driver_specific": {} 00:08:01.716 } 00:08:01.716 ] 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.716 12:25:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.716 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.717 "name": "Existed_Raid", 00:08:01.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.717 "strip_size_kb": 64, 00:08:01.717 "state": "configuring", 00:08:01.717 "raid_level": "raid0", 00:08:01.717 "superblock": false, 00:08:01.717 "num_base_bdevs": 2, 00:08:01.717 "num_base_bdevs_discovered": 1, 00:08:01.717 "num_base_bdevs_operational": 2, 00:08:01.717 "base_bdevs_list": [ 00:08:01.717 { 00:08:01.717 "name": "BaseBdev1", 00:08:01.717 "uuid": "2b28d8c8-6763-4be9-b08a-83e49e8613b4", 00:08:01.717 "is_configured": true, 00:08:01.717 "data_offset": 0, 00:08:01.717 "data_size": 65536 00:08:01.717 }, 00:08:01.717 { 00:08:01.717 "name": "BaseBdev2", 00:08:01.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.717 "is_configured": false, 00:08:01.717 "data_offset": 0, 00:08:01.717 "data_size": 0 00:08:01.717 } 00:08:01.717 ] 00:08:01.717 }' 00:08:01.717 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.717 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.285 12:25:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.286 [2024-09-30 12:25:13.907585] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:02.286 [2024-09-30 12:25:13.907646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.286 [2024-09-30 12:25:13.919629] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.286 [2024-09-30 12:25:13.921453] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.286 [2024-09-30 12:25:13.921548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.286 "name": "Existed_Raid", 00:08:02.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.286 "strip_size_kb": 64, 00:08:02.286 "state": "configuring", 00:08:02.286 "raid_level": "raid0", 00:08:02.286 "superblock": false, 00:08:02.286 "num_base_bdevs": 2, 00:08:02.286 "num_base_bdevs_discovered": 1, 00:08:02.286 "num_base_bdevs_operational": 2, 
00:08:02.286 "base_bdevs_list": [ 00:08:02.286 { 00:08:02.286 "name": "BaseBdev1", 00:08:02.286 "uuid": "2b28d8c8-6763-4be9-b08a-83e49e8613b4", 00:08:02.286 "is_configured": true, 00:08:02.286 "data_offset": 0, 00:08:02.286 "data_size": 65536 00:08:02.286 }, 00:08:02.286 { 00:08:02.286 "name": "BaseBdev2", 00:08:02.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.286 "is_configured": false, 00:08:02.286 "data_offset": 0, 00:08:02.286 "data_size": 0 00:08:02.286 } 00:08:02.286 ] 00:08:02.286 }' 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.286 12:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.545 [2024-09-30 12:25:14.357550] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.545 [2024-09-30 12:25:14.357692] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:02.545 [2024-09-30 12:25:14.357721] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:02.545 [2024-09-30 12:25:14.358090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:02.545 [2024-09-30 12:25:14.358310] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:02.545 [2024-09-30 12:25:14.358368] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:02.545 [2024-09-30 12:25:14.358681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.545 BaseBdev2 00:08:02.545 
12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.545 [ 00:08:02.545 { 00:08:02.545 "name": "BaseBdev2", 00:08:02.545 "aliases": [ 00:08:02.545 "37298f87-ce76-4d04-9b84-734385bf3b22" 00:08:02.545 ], 00:08:02.545 "product_name": "Malloc disk", 00:08:02.545 "block_size": 512, 00:08:02.545 "num_blocks": 65536, 00:08:02.545 "uuid": "37298f87-ce76-4d04-9b84-734385bf3b22", 00:08:02.545 "assigned_rate_limits": { 00:08:02.545 "rw_ios_per_sec": 0, 00:08:02.545 "rw_mbytes_per_sec": 0, 
00:08:02.545 "r_mbytes_per_sec": 0, 00:08:02.545 "w_mbytes_per_sec": 0 00:08:02.545 }, 00:08:02.545 "claimed": true, 00:08:02.545 "claim_type": "exclusive_write", 00:08:02.545 "zoned": false, 00:08:02.545 "supported_io_types": { 00:08:02.545 "read": true, 00:08:02.545 "write": true, 00:08:02.545 "unmap": true, 00:08:02.545 "flush": true, 00:08:02.545 "reset": true, 00:08:02.545 "nvme_admin": false, 00:08:02.545 "nvme_io": false, 00:08:02.545 "nvme_io_md": false, 00:08:02.545 "write_zeroes": true, 00:08:02.545 "zcopy": true, 00:08:02.545 "get_zone_info": false, 00:08:02.545 "zone_management": false, 00:08:02.545 "zone_append": false, 00:08:02.545 "compare": false, 00:08:02.545 "compare_and_write": false, 00:08:02.545 "abort": true, 00:08:02.545 "seek_hole": false, 00:08:02.545 "seek_data": false, 00:08:02.545 "copy": true, 00:08:02.545 "nvme_iov_md": false 00:08:02.545 }, 00:08:02.545 "memory_domains": [ 00:08:02.545 { 00:08:02.545 "dma_device_id": "system", 00:08:02.545 "dma_device_type": 1 00:08:02.545 }, 00:08:02.545 { 00:08:02.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.545 "dma_device_type": 2 00:08:02.545 } 00:08:02.545 ], 00:08:02.545 "driver_specific": {} 00:08:02.545 } 00:08:02.545 ] 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:08:02.545 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.546 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.546 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.546 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.546 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.546 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.546 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.546 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.546 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.546 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.546 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.546 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.805 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.805 "name": "Existed_Raid", 00:08:02.805 "uuid": "bba3a294-a5b6-47aa-821e-a362b6589a77", 00:08:02.805 "strip_size_kb": 64, 00:08:02.805 "state": "online", 00:08:02.805 "raid_level": "raid0", 00:08:02.805 "superblock": false, 00:08:02.805 "num_base_bdevs": 2, 00:08:02.805 "num_base_bdevs_discovered": 2, 00:08:02.805 "num_base_bdevs_operational": 2, 00:08:02.805 "base_bdevs_list": [ 00:08:02.805 { 00:08:02.805 "name": "BaseBdev1", 00:08:02.805 "uuid": "2b28d8c8-6763-4be9-b08a-83e49e8613b4", 00:08:02.805 
"is_configured": true, 00:08:02.805 "data_offset": 0, 00:08:02.805 "data_size": 65536 00:08:02.805 }, 00:08:02.805 { 00:08:02.805 "name": "BaseBdev2", 00:08:02.805 "uuid": "37298f87-ce76-4d04-9b84-734385bf3b22", 00:08:02.805 "is_configured": true, 00:08:02.805 "data_offset": 0, 00:08:02.805 "data_size": 65536 00:08:02.805 } 00:08:02.805 ] 00:08:02.805 }' 00:08:02.805 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.805 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.064 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:03.064 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:03.064 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:03.064 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:03.064 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:03.064 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:03.064 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:03.064 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:03.064 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.064 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.064 [2024-09-30 12:25:14.873000] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.064 12:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.064 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:08:03.064 "name": "Existed_Raid", 00:08:03.064 "aliases": [ 00:08:03.064 "bba3a294-a5b6-47aa-821e-a362b6589a77" 00:08:03.064 ], 00:08:03.064 "product_name": "Raid Volume", 00:08:03.064 "block_size": 512, 00:08:03.064 "num_blocks": 131072, 00:08:03.064 "uuid": "bba3a294-a5b6-47aa-821e-a362b6589a77", 00:08:03.064 "assigned_rate_limits": { 00:08:03.064 "rw_ios_per_sec": 0, 00:08:03.064 "rw_mbytes_per_sec": 0, 00:08:03.064 "r_mbytes_per_sec": 0, 00:08:03.064 "w_mbytes_per_sec": 0 00:08:03.064 }, 00:08:03.064 "claimed": false, 00:08:03.064 "zoned": false, 00:08:03.064 "supported_io_types": { 00:08:03.064 "read": true, 00:08:03.064 "write": true, 00:08:03.064 "unmap": true, 00:08:03.064 "flush": true, 00:08:03.064 "reset": true, 00:08:03.064 "nvme_admin": false, 00:08:03.064 "nvme_io": false, 00:08:03.064 "nvme_io_md": false, 00:08:03.064 "write_zeroes": true, 00:08:03.064 "zcopy": false, 00:08:03.064 "get_zone_info": false, 00:08:03.064 "zone_management": false, 00:08:03.064 "zone_append": false, 00:08:03.064 "compare": false, 00:08:03.064 "compare_and_write": false, 00:08:03.064 "abort": false, 00:08:03.064 "seek_hole": false, 00:08:03.064 "seek_data": false, 00:08:03.064 "copy": false, 00:08:03.064 "nvme_iov_md": false 00:08:03.064 }, 00:08:03.064 "memory_domains": [ 00:08:03.064 { 00:08:03.064 "dma_device_id": "system", 00:08:03.064 "dma_device_type": 1 00:08:03.064 }, 00:08:03.064 { 00:08:03.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.064 "dma_device_type": 2 00:08:03.064 }, 00:08:03.064 { 00:08:03.065 "dma_device_id": "system", 00:08:03.065 "dma_device_type": 1 00:08:03.065 }, 00:08:03.065 { 00:08:03.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.065 "dma_device_type": 2 00:08:03.065 } 00:08:03.065 ], 00:08:03.065 "driver_specific": { 00:08:03.065 "raid": { 00:08:03.065 "uuid": "bba3a294-a5b6-47aa-821e-a362b6589a77", 00:08:03.065 "strip_size_kb": 64, 00:08:03.065 "state": "online", 00:08:03.065 "raid_level": "raid0", 
00:08:03.065 "superblock": false, 00:08:03.065 "num_base_bdevs": 2, 00:08:03.065 "num_base_bdevs_discovered": 2, 00:08:03.065 "num_base_bdevs_operational": 2, 00:08:03.065 "base_bdevs_list": [ 00:08:03.065 { 00:08:03.065 "name": "BaseBdev1", 00:08:03.065 "uuid": "2b28d8c8-6763-4be9-b08a-83e49e8613b4", 00:08:03.065 "is_configured": true, 00:08:03.065 "data_offset": 0, 00:08:03.065 "data_size": 65536 00:08:03.065 }, 00:08:03.065 { 00:08:03.065 "name": "BaseBdev2", 00:08:03.065 "uuid": "37298f87-ce76-4d04-9b84-734385bf3b22", 00:08:03.065 "is_configured": true, 00:08:03.065 "data_offset": 0, 00:08:03.065 "data_size": 65536 00:08:03.065 } 00:08:03.065 ] 00:08:03.065 } 00:08:03.065 } 00:08:03.065 }' 00:08:03.065 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:03.323 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:03.323 BaseBdev2' 00:08:03.323 12:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.323 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:03.323 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.323 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:03.323 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.323 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.323 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.323 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:03.323 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.323 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.323 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.323 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:03.323 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.323 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.324 [2024-09-30 12:25:15.120345] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:03.324 [2024-09-30 12:25:15.120384] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:03.324 [2024-09-30 12:25:15.120442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.324 12:25:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:03.324 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.584 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.584 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.584 "name": "Existed_Raid", 00:08:03.584 "uuid": "bba3a294-a5b6-47aa-821e-a362b6589a77", 00:08:03.584 "strip_size_kb": 64, 00:08:03.584 "state": "offline", 00:08:03.584 "raid_level": "raid0", 00:08:03.584 "superblock": false, 00:08:03.584 "num_base_bdevs": 2, 00:08:03.584 "num_base_bdevs_discovered": 1, 00:08:03.584 "num_base_bdevs_operational": 1, 00:08:03.584 "base_bdevs_list": [ 00:08:03.584 { 00:08:03.584 "name": null, 00:08:03.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.584 "is_configured": false, 00:08:03.584 "data_offset": 0, 00:08:03.584 "data_size": 65536 00:08:03.584 }, 00:08:03.584 { 00:08:03.584 "name": "BaseBdev2", 00:08:03.584 "uuid": "37298f87-ce76-4d04-9b84-734385bf3b22", 00:08:03.584 "is_configured": true, 00:08:03.584 "data_offset": 0, 00:08:03.584 "data_size": 65536 00:08:03.584 } 00:08:03.584 ] 00:08:03.584 }' 00:08:03.584 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.584 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.844 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:03.844 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:03.844 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:03.844 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.844 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.844 12:25:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.844 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.844 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:03.844 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:03.844 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:03.844 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.844 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.844 [2024-09-30 12:25:15.678406] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:03.844 [2024-09-30 12:25:15.678549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 
00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60601 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 60601 ']' 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 60601 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60601 00:08:04.105 killing process with pid 60601 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60601' 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 60601 00:08:04.105 [2024-09-30 12:25:15.853084] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:04.105 12:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 60601 00:08:04.105 [2024-09-30 12:25:15.869263] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:05.485 ************************************ 00:08:05.485 END TEST raid_state_function_test 00:08:05.485 ************************************ 00:08:05.485 12:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:05.485 00:08:05.485 real 0m5.031s 
00:08:05.485 user 0m7.183s 00:08:05.485 sys 0m0.799s 00:08:05.485 12:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.485 12:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.485 12:25:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:08:05.485 12:25:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:05.485 12:25:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.485 12:25:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:05.485 ************************************ 00:08:05.485 START TEST raid_state_function_test_sb 00:08:05.485 ************************************ 00:08:05.485 12:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:08:05.485 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:05.485 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:05.485 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:05.485 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:05.485 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:05.485 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:05.485 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60854 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60854' 00:08:05.486 Process raid pid: 60854 00:08:05.486 12:25:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60854 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 60854 ']' 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.486 12:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.486 [2024-09-30 12:25:17.208343] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:05.486 [2024-09-30 12:25:17.208553] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.486 [2024-09-30 12:25:17.371426] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.746 [2024-09-30 12:25:17.567571] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.005 [2024-09-30 12:25:17.768909] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.005 [2024-09-30 12:25:17.769037] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.265 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.265 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.266 [2024-09-30 12:25:18.023898] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:06.266 [2024-09-30 12:25:18.024026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:06.266 [2024-09-30 12:25:18.024061] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.266 [2024-09-30 12:25:18.024091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.266 
12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.266 "name": "Existed_Raid", 00:08:06.266 "uuid": "0935cedf-0231-423a-92ee-f9caedcdc250", 00:08:06.266 "strip_size_kb": 
64, 00:08:06.266 "state": "configuring", 00:08:06.266 "raid_level": "raid0", 00:08:06.266 "superblock": true, 00:08:06.266 "num_base_bdevs": 2, 00:08:06.266 "num_base_bdevs_discovered": 0, 00:08:06.266 "num_base_bdevs_operational": 2, 00:08:06.266 "base_bdevs_list": [ 00:08:06.266 { 00:08:06.266 "name": "BaseBdev1", 00:08:06.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.266 "is_configured": false, 00:08:06.266 "data_offset": 0, 00:08:06.266 "data_size": 0 00:08:06.266 }, 00:08:06.266 { 00:08:06.266 "name": "BaseBdev2", 00:08:06.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.266 "is_configured": false, 00:08:06.266 "data_offset": 0, 00:08:06.266 "data_size": 0 00:08:06.266 } 00:08:06.266 ] 00:08:06.266 }' 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.266 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.836 [2024-09-30 12:25:18.467020] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:06.836 [2024-09-30 12:25:18.467065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.836 12:25:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.836 [2024-09-30 12:25:18.475031] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:06.836 [2024-09-30 12:25:18.475150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:06.836 [2024-09-30 12:25:18.475184] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.836 [2024-09-30 12:25:18.475215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.836 [2024-09-30 12:25:18.546735] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.836 BaseBdev1 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.836 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.836 [ 00:08:06.836 { 00:08:06.836 "name": "BaseBdev1", 00:08:06.836 "aliases": [ 00:08:06.836 "497087a4-ff7d-4520-8b5b-2900627c6617" 00:08:06.836 ], 00:08:06.836 "product_name": "Malloc disk", 00:08:06.836 "block_size": 512, 00:08:06.836 "num_blocks": 65536, 00:08:06.837 "uuid": "497087a4-ff7d-4520-8b5b-2900627c6617", 00:08:06.837 "assigned_rate_limits": { 00:08:06.837 "rw_ios_per_sec": 0, 00:08:06.837 "rw_mbytes_per_sec": 0, 00:08:06.837 "r_mbytes_per_sec": 0, 00:08:06.837 "w_mbytes_per_sec": 0 00:08:06.837 }, 00:08:06.837 "claimed": true, 00:08:06.837 "claim_type": "exclusive_write", 00:08:06.837 "zoned": false, 00:08:06.837 "supported_io_types": { 00:08:06.837 "read": true, 00:08:06.837 "write": true, 00:08:06.837 "unmap": true, 00:08:06.837 "flush": true, 00:08:06.837 "reset": true, 00:08:06.837 "nvme_admin": false, 00:08:06.837 "nvme_io": false, 00:08:06.837 "nvme_io_md": false, 00:08:06.837 "write_zeroes": true, 00:08:06.837 "zcopy": true, 00:08:06.837 "get_zone_info": false, 00:08:06.837 "zone_management": false, 00:08:06.837 "zone_append": false, 00:08:06.837 "compare": false, 00:08:06.837 "compare_and_write": false, 00:08:06.837 
"abort": true, 00:08:06.837 "seek_hole": false, 00:08:06.837 "seek_data": false, 00:08:06.837 "copy": true, 00:08:06.837 "nvme_iov_md": false 00:08:06.837 }, 00:08:06.837 "memory_domains": [ 00:08:06.837 { 00:08:06.837 "dma_device_id": "system", 00:08:06.837 "dma_device_type": 1 00:08:06.837 }, 00:08:06.837 { 00:08:06.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.837 "dma_device_type": 2 00:08:06.837 } 00:08:06.837 ], 00:08:06.837 "driver_specific": {} 00:08:06.837 } 00:08:06.837 ] 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.837 "name": "Existed_Raid", 00:08:06.837 "uuid": "d9918323-fe75-42f1-b140-99f1995189a2", 00:08:06.837 "strip_size_kb": 64, 00:08:06.837 "state": "configuring", 00:08:06.837 "raid_level": "raid0", 00:08:06.837 "superblock": true, 00:08:06.837 "num_base_bdevs": 2, 00:08:06.837 "num_base_bdevs_discovered": 1, 00:08:06.837 "num_base_bdevs_operational": 2, 00:08:06.837 "base_bdevs_list": [ 00:08:06.837 { 00:08:06.837 "name": "BaseBdev1", 00:08:06.837 "uuid": "497087a4-ff7d-4520-8b5b-2900627c6617", 00:08:06.837 "is_configured": true, 00:08:06.837 "data_offset": 2048, 00:08:06.837 "data_size": 63488 00:08:06.837 }, 00:08:06.837 { 00:08:06.837 "name": "BaseBdev2", 00:08:06.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.837 "is_configured": false, 00:08:06.837 "data_offset": 0, 00:08:06.837 "data_size": 0 00:08:06.837 } 00:08:06.837 ] 00:08:06.837 }' 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.837 12:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.405 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:07.405 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.405 12:25:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:07.405 [2024-09-30 12:25:19.021916] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:07.405 [2024-09-30 12:25:19.021971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:07.405 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.406 [2024-09-30 12:25:19.029964] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:07.406 [2024-09-30 12:25:19.031857] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:07.406 [2024-09-30 12:25:19.031947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.406 "name": "Existed_Raid", 00:08:07.406 "uuid": "0b245259-4526-41a7-8360-fa960cdab81e", 00:08:07.406 "strip_size_kb": 64, 00:08:07.406 "state": "configuring", 00:08:07.406 "raid_level": "raid0", 00:08:07.406 "superblock": true, 00:08:07.406 "num_base_bdevs": 2, 00:08:07.406 "num_base_bdevs_discovered": 1, 00:08:07.406 "num_base_bdevs_operational": 2, 00:08:07.406 "base_bdevs_list": [ 00:08:07.406 { 00:08:07.406 "name": "BaseBdev1", 00:08:07.406 "uuid": "497087a4-ff7d-4520-8b5b-2900627c6617", 00:08:07.406 "is_configured": true, 00:08:07.406 "data_offset": 2048, 
00:08:07.406 "data_size": 63488 00:08:07.406 }, 00:08:07.406 { 00:08:07.406 "name": "BaseBdev2", 00:08:07.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.406 "is_configured": false, 00:08:07.406 "data_offset": 0, 00:08:07.406 "data_size": 0 00:08:07.406 } 00:08:07.406 ] 00:08:07.406 }' 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.406 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.665 [2024-09-30 12:25:19.429266] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.665 [2024-09-30 12:25:19.429662] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:07.665 [2024-09-30 12:25:19.429684] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:07.665 [2024-09-30 12:25:19.429981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:07.665 BaseBdev2 00:08:07.665 [2024-09-30 12:25:19.430152] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:07.665 [2024-09-30 12:25:19.430173] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:07.665 [2024-09-30 12:25:19.430332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.665 [ 00:08:07.665 { 00:08:07.665 "name": "BaseBdev2", 00:08:07.665 "aliases": [ 00:08:07.665 "7d0fd122-17b0-4dd4-8ed3-1235ec2b7539" 00:08:07.665 ], 00:08:07.665 "product_name": "Malloc disk", 00:08:07.665 "block_size": 512, 00:08:07.665 "num_blocks": 65536, 00:08:07.665 "uuid": "7d0fd122-17b0-4dd4-8ed3-1235ec2b7539", 00:08:07.665 "assigned_rate_limits": { 00:08:07.665 "rw_ios_per_sec": 0, 00:08:07.665 "rw_mbytes_per_sec": 0, 00:08:07.665 "r_mbytes_per_sec": 0, 00:08:07.665 "w_mbytes_per_sec": 0 00:08:07.665 }, 00:08:07.665 "claimed": true, 00:08:07.665 "claim_type": 
"exclusive_write", 00:08:07.665 "zoned": false, 00:08:07.665 "supported_io_types": { 00:08:07.665 "read": true, 00:08:07.665 "write": true, 00:08:07.665 "unmap": true, 00:08:07.665 "flush": true, 00:08:07.665 "reset": true, 00:08:07.665 "nvme_admin": false, 00:08:07.665 "nvme_io": false, 00:08:07.665 "nvme_io_md": false, 00:08:07.665 "write_zeroes": true, 00:08:07.665 "zcopy": true, 00:08:07.665 "get_zone_info": false, 00:08:07.665 "zone_management": false, 00:08:07.665 "zone_append": false, 00:08:07.665 "compare": false, 00:08:07.665 "compare_and_write": false, 00:08:07.665 "abort": true, 00:08:07.665 "seek_hole": false, 00:08:07.665 "seek_data": false, 00:08:07.665 "copy": true, 00:08:07.665 "nvme_iov_md": false 00:08:07.665 }, 00:08:07.665 "memory_domains": [ 00:08:07.665 { 00:08:07.665 "dma_device_id": "system", 00:08:07.665 "dma_device_type": 1 00:08:07.665 }, 00:08:07.665 { 00:08:07.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.665 "dma_device_type": 2 00:08:07.665 } 00:08:07.665 ], 00:08:07.665 "driver_specific": {} 00:08:07.665 } 00:08:07.665 ] 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.665 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.666 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.666 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.666 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.666 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.666 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.666 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.666 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.666 "name": "Existed_Raid", 00:08:07.666 "uuid": "0b245259-4526-41a7-8360-fa960cdab81e", 00:08:07.666 "strip_size_kb": 64, 00:08:07.666 "state": "online", 00:08:07.666 "raid_level": "raid0", 00:08:07.666 "superblock": true, 00:08:07.666 "num_base_bdevs": 2, 00:08:07.666 "num_base_bdevs_discovered": 2, 00:08:07.666 "num_base_bdevs_operational": 2, 00:08:07.666 "base_bdevs_list": [ 00:08:07.666 { 00:08:07.666 "name": "BaseBdev1", 00:08:07.666 "uuid": "497087a4-ff7d-4520-8b5b-2900627c6617", 00:08:07.666 "is_configured": true, 00:08:07.666 "data_offset": 2048, 00:08:07.666 "data_size": 63488 
00:08:07.666 }, 00:08:07.666 { 00:08:07.666 "name": "BaseBdev2", 00:08:07.666 "uuid": "7d0fd122-17b0-4dd4-8ed3-1235ec2b7539", 00:08:07.666 "is_configured": true, 00:08:07.666 "data_offset": 2048, 00:08:07.666 "data_size": 63488 00:08:07.666 } 00:08:07.666 ] 00:08:07.666 }' 00:08:07.666 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.666 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.233 [2024-09-30 12:25:19.864862] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:08.233 "name": 
"Existed_Raid", 00:08:08.233 "aliases": [ 00:08:08.233 "0b245259-4526-41a7-8360-fa960cdab81e" 00:08:08.233 ], 00:08:08.233 "product_name": "Raid Volume", 00:08:08.233 "block_size": 512, 00:08:08.233 "num_blocks": 126976, 00:08:08.233 "uuid": "0b245259-4526-41a7-8360-fa960cdab81e", 00:08:08.233 "assigned_rate_limits": { 00:08:08.233 "rw_ios_per_sec": 0, 00:08:08.233 "rw_mbytes_per_sec": 0, 00:08:08.233 "r_mbytes_per_sec": 0, 00:08:08.233 "w_mbytes_per_sec": 0 00:08:08.233 }, 00:08:08.233 "claimed": false, 00:08:08.233 "zoned": false, 00:08:08.233 "supported_io_types": { 00:08:08.233 "read": true, 00:08:08.233 "write": true, 00:08:08.233 "unmap": true, 00:08:08.233 "flush": true, 00:08:08.233 "reset": true, 00:08:08.233 "nvme_admin": false, 00:08:08.233 "nvme_io": false, 00:08:08.233 "nvme_io_md": false, 00:08:08.233 "write_zeroes": true, 00:08:08.233 "zcopy": false, 00:08:08.233 "get_zone_info": false, 00:08:08.233 "zone_management": false, 00:08:08.233 "zone_append": false, 00:08:08.233 "compare": false, 00:08:08.233 "compare_and_write": false, 00:08:08.233 "abort": false, 00:08:08.233 "seek_hole": false, 00:08:08.233 "seek_data": false, 00:08:08.233 "copy": false, 00:08:08.233 "nvme_iov_md": false 00:08:08.233 }, 00:08:08.233 "memory_domains": [ 00:08:08.233 { 00:08:08.233 "dma_device_id": "system", 00:08:08.233 "dma_device_type": 1 00:08:08.233 }, 00:08:08.233 { 00:08:08.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.233 "dma_device_type": 2 00:08:08.233 }, 00:08:08.233 { 00:08:08.233 "dma_device_id": "system", 00:08:08.233 "dma_device_type": 1 00:08:08.233 }, 00:08:08.233 { 00:08:08.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.233 "dma_device_type": 2 00:08:08.233 } 00:08:08.233 ], 00:08:08.233 "driver_specific": { 00:08:08.233 "raid": { 00:08:08.233 "uuid": "0b245259-4526-41a7-8360-fa960cdab81e", 00:08:08.233 "strip_size_kb": 64, 00:08:08.233 "state": "online", 00:08:08.233 "raid_level": "raid0", 00:08:08.233 "superblock": true, 00:08:08.233 
"num_base_bdevs": 2, 00:08:08.233 "num_base_bdevs_discovered": 2, 00:08:08.233 "num_base_bdevs_operational": 2, 00:08:08.233 "base_bdevs_list": [ 00:08:08.233 { 00:08:08.233 "name": "BaseBdev1", 00:08:08.233 "uuid": "497087a4-ff7d-4520-8b5b-2900627c6617", 00:08:08.233 "is_configured": true, 00:08:08.233 "data_offset": 2048, 00:08:08.233 "data_size": 63488 00:08:08.233 }, 00:08:08.233 { 00:08:08.233 "name": "BaseBdev2", 00:08:08.233 "uuid": "7d0fd122-17b0-4dd4-8ed3-1235ec2b7539", 00:08:08.233 "is_configured": true, 00:08:08.233 "data_offset": 2048, 00:08:08.233 "data_size": 63488 00:08:08.233 } 00:08:08.233 ] 00:08:08.233 } 00:08:08.233 } 00:08:08.233 }' 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:08.233 BaseBdev2' 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.233 12:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.233 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:08.233 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.233 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.233 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.233 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:08.233 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.233 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.233 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.233 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.233 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.233 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.233 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:08.233 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.233 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.233 [2024-09-30 12:25:20.072272] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:08.233 [2024-09-30 12:25:20.072368] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.233 [2024-09-30 12:25:20.072442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.493 12:25:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.493 "name": "Existed_Raid", 00:08:08.493 "uuid": "0b245259-4526-41a7-8360-fa960cdab81e", 00:08:08.493 "strip_size_kb": 64, 00:08:08.493 "state": "offline", 00:08:08.493 "raid_level": "raid0", 00:08:08.493 "superblock": true, 00:08:08.493 "num_base_bdevs": 2, 00:08:08.493 "num_base_bdevs_discovered": 1, 00:08:08.493 "num_base_bdevs_operational": 1, 00:08:08.493 "base_bdevs_list": [ 00:08:08.493 { 00:08:08.493 "name": null, 00:08:08.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.493 "is_configured": false, 00:08:08.493 "data_offset": 0, 00:08:08.493 "data_size": 63488 00:08:08.493 }, 00:08:08.493 { 00:08:08.493 "name": "BaseBdev2", 00:08:08.493 "uuid": "7d0fd122-17b0-4dd4-8ed3-1235ec2b7539", 00:08:08.493 "is_configured": true, 00:08:08.493 "data_offset": 2048, 00:08:08.493 "data_size": 63488 00:08:08.493 } 00:08:08.493 ] 00:08:08.493 }' 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.493 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.752 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:08.752 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.752 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:08.752 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.752 12:25:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.753 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.753 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.753 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:08.753 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:08.753 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:08.753 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.753 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.753 [2024-09-30 12:25:20.619635] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:08.753 [2024-09-30 12:25:20.619788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.012 12:25:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60854 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 60854 ']' 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 60854 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60854 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60854' 00:08:09.012 killing process with pid 60854 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 60854 00:08:09.012 [2024-09-30 12:25:20.797128] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.012 12:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 60854 00:08:09.012 [2024-09-30 12:25:20.812977] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.403 12:25:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:08:10.403 00:08:10.403 real 0m4.897s 00:08:10.403 user 0m6.941s 00:08:10.403 sys 0m0.785s 00:08:10.403 12:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.403 12:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.403 ************************************ 00:08:10.403 END TEST raid_state_function_test_sb 00:08:10.403 ************************************ 00:08:10.403 12:25:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:10.403 12:25:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:10.403 12:25:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.403 12:25:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.403 ************************************ 00:08:10.403 START TEST raid_superblock_test 00:08:10.403 ************************************ 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:10.403 12:25:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61101 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61101 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 61101 ']' 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.403 12:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.403 [2024-09-30 12:25:22.179387] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:10.403 [2024-09-30 12:25:22.179621] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61101 ] 00:08:10.662 [2024-09-30 12:25:22.346490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.662 [2024-09-30 12:25:22.545769] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.920 [2024-09-30 12:25:22.735823] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.921 [2024-09-30 12:25:22.735912] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.180 12:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.180 12:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:11.180 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:11.180 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:11.180 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:11.180 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:11.180 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:11.180 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:11.180 12:25:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:11.180 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:11.180 12:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:11.180 12:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.180 12:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.180 malloc1 00:08:11.180 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.180 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:11.180 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.180 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.180 [2024-09-30 12:25:23.029091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:11.180 [2024-09-30 12:25:23.029269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.180 [2024-09-30 12:25:23.029315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:11.180 [2024-09-30 12:25:23.029374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.180 [2024-09-30 12:25:23.031472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.180 [2024-09-30 12:25:23.031557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:11.180 pt1 00:08:11.180 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.180 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:11.180 12:25:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:11.180 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:11.180 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:11.180 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:11.180 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:11.180 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:11.180 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:11.180 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:11.180 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.181 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.440 malloc2 00:08:11.440 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.440 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:11.440 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.440 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.440 [2024-09-30 12:25:23.112250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:11.440 [2024-09-30 12:25:23.112377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.440 [2024-09-30 12:25:23.112423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:11.440 
[2024-09-30 12:25:23.112480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.441 [2024-09-30 12:25:23.114555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.441 [2024-09-30 12:25:23.114638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:11.441 pt2 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.441 [2024-09-30 12:25:23.124300] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:11.441 [2024-09-30 12:25:23.126167] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:11.441 [2024-09-30 12:25:23.126325] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:11.441 [2024-09-30 12:25:23.126339] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:11.441 [2024-09-30 12:25:23.126561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:11.441 [2024-09-30 12:25:23.126702] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:11.441 [2024-09-30 12:25:23.126714] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:11.441 [2024-09-30 12:25:23.126898] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.441 "name": "raid_bdev1", 00:08:11.441 "uuid": 
"648eb9c4-dfe8-4a7a-9b3e-8f1c2030f4da", 00:08:11.441 "strip_size_kb": 64, 00:08:11.441 "state": "online", 00:08:11.441 "raid_level": "raid0", 00:08:11.441 "superblock": true, 00:08:11.441 "num_base_bdevs": 2, 00:08:11.441 "num_base_bdevs_discovered": 2, 00:08:11.441 "num_base_bdevs_operational": 2, 00:08:11.441 "base_bdevs_list": [ 00:08:11.441 { 00:08:11.441 "name": "pt1", 00:08:11.441 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:11.441 "is_configured": true, 00:08:11.441 "data_offset": 2048, 00:08:11.441 "data_size": 63488 00:08:11.441 }, 00:08:11.441 { 00:08:11.441 "name": "pt2", 00:08:11.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:11.441 "is_configured": true, 00:08:11.441 "data_offset": 2048, 00:08:11.441 "data_size": 63488 00:08:11.441 } 00:08:11.441 ] 00:08:11.441 }' 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.441 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.701 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:11.701 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:11.701 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:11.701 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:11.701 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.701 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.701 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:11.701 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.701 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.701 
12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:11.701 [2024-09-30 12:25:23.571753] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.701 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:11.961 "name": "raid_bdev1", 00:08:11.961 "aliases": [ 00:08:11.961 "648eb9c4-dfe8-4a7a-9b3e-8f1c2030f4da" 00:08:11.961 ], 00:08:11.961 "product_name": "Raid Volume", 00:08:11.961 "block_size": 512, 00:08:11.961 "num_blocks": 126976, 00:08:11.961 "uuid": "648eb9c4-dfe8-4a7a-9b3e-8f1c2030f4da", 00:08:11.961 "assigned_rate_limits": { 00:08:11.961 "rw_ios_per_sec": 0, 00:08:11.961 "rw_mbytes_per_sec": 0, 00:08:11.961 "r_mbytes_per_sec": 0, 00:08:11.961 "w_mbytes_per_sec": 0 00:08:11.961 }, 00:08:11.961 "claimed": false, 00:08:11.961 "zoned": false, 00:08:11.961 "supported_io_types": { 00:08:11.961 "read": true, 00:08:11.961 "write": true, 00:08:11.961 "unmap": true, 00:08:11.961 "flush": true, 00:08:11.961 "reset": true, 00:08:11.961 "nvme_admin": false, 00:08:11.961 "nvme_io": false, 00:08:11.961 "nvme_io_md": false, 00:08:11.961 "write_zeroes": true, 00:08:11.961 "zcopy": false, 00:08:11.961 "get_zone_info": false, 00:08:11.961 "zone_management": false, 00:08:11.961 "zone_append": false, 00:08:11.961 "compare": false, 00:08:11.961 "compare_and_write": false, 00:08:11.961 "abort": false, 00:08:11.961 "seek_hole": false, 00:08:11.961 "seek_data": false, 00:08:11.961 "copy": false, 00:08:11.961 "nvme_iov_md": false 00:08:11.961 }, 00:08:11.961 "memory_domains": [ 00:08:11.961 { 00:08:11.961 "dma_device_id": "system", 00:08:11.961 "dma_device_type": 1 00:08:11.961 }, 00:08:11.961 { 00:08:11.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.961 "dma_device_type": 2 00:08:11.961 }, 00:08:11.961 { 00:08:11.961 "dma_device_id": "system", 00:08:11.961 
"dma_device_type": 1 00:08:11.961 }, 00:08:11.961 { 00:08:11.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.961 "dma_device_type": 2 00:08:11.961 } 00:08:11.961 ], 00:08:11.961 "driver_specific": { 00:08:11.961 "raid": { 00:08:11.961 "uuid": "648eb9c4-dfe8-4a7a-9b3e-8f1c2030f4da", 00:08:11.961 "strip_size_kb": 64, 00:08:11.961 "state": "online", 00:08:11.961 "raid_level": "raid0", 00:08:11.961 "superblock": true, 00:08:11.961 "num_base_bdevs": 2, 00:08:11.961 "num_base_bdevs_discovered": 2, 00:08:11.961 "num_base_bdevs_operational": 2, 00:08:11.961 "base_bdevs_list": [ 00:08:11.961 { 00:08:11.961 "name": "pt1", 00:08:11.961 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:11.961 "is_configured": true, 00:08:11.961 "data_offset": 2048, 00:08:11.961 "data_size": 63488 00:08:11.961 }, 00:08:11.961 { 00:08:11.961 "name": "pt2", 00:08:11.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:11.961 "is_configured": true, 00:08:11.961 "data_offset": 2048, 00:08:11.961 "data_size": 63488 00:08:11.961 } 00:08:11.961 ] 00:08:11.961 } 00:08:11.961 } 00:08:11.961 }' 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:11.961 pt2' 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.961 [2024-09-30 12:25:23.823240] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:08:11.961 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=648eb9c4-dfe8-4a7a-9b3e-8f1c2030f4da 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 648eb9c4-dfe8-4a7a-9b3e-8f1c2030f4da ']' 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.222 [2024-09-30 12:25:23.870926] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:12.222 [2024-09-30 12:25:23.871009] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.222 [2024-09-30 12:25:23.871122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.222 [2024-09-30 12:25:23.871187] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:12.222 [2024-09-30 12:25:23.871228] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.222 
12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:12.222 12:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.222 [2024-09-30 12:25:24.010699] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:12.222 [2024-09-30 12:25:24.012467] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:12.222 [2024-09-30 12:25:24.012530] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:12.222 [2024-09-30 12:25:24.012580] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:12.222 [2024-09-30 12:25:24.012597] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:12.222 [2024-09-30 12:25:24.012608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:12.222 request: 00:08:12.222 { 00:08:12.222 "name": "raid_bdev1", 00:08:12.222 "raid_level": "raid0", 00:08:12.222 "base_bdevs": [ 00:08:12.222 "malloc1", 00:08:12.222 "malloc2" 00:08:12.222 ], 00:08:12.222 "strip_size_kb": 64, 00:08:12.222 "superblock": false, 00:08:12.222 "method": "bdev_raid_create", 00:08:12.222 "req_id": 1 00:08:12.222 } 00:08:12.222 Got JSON-RPC error response 00:08:12.222 response: 00:08:12.222 { 00:08:12.222 "code": -17, 00:08:12.222 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:12.222 } 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.222 [2024-09-30 12:25:24.074568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:12.222 [2024-09-30 12:25:24.074686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.222 [2024-09-30 12:25:24.074728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:12.222 [2024-09-30 12:25:24.074803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.222 [2024-09-30 12:25:24.077116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.222 [2024-09-30 12:25:24.077205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:12.222 [2024-09-30 12:25:24.077314] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:12.222 [2024-09-30 12:25:24.077415] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:12.222 pt1 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.222 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.223 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.482 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.482 "name": "raid_bdev1", 00:08:12.482 "uuid": "648eb9c4-dfe8-4a7a-9b3e-8f1c2030f4da", 00:08:12.482 "strip_size_kb": 64, 00:08:12.482 "state": "configuring", 00:08:12.482 "raid_level": "raid0", 00:08:12.482 "superblock": true, 00:08:12.482 "num_base_bdevs": 2, 00:08:12.482 "num_base_bdevs_discovered": 1, 00:08:12.482 "num_base_bdevs_operational": 2, 00:08:12.482 "base_bdevs_list": [ 00:08:12.482 { 00:08:12.482 "name": "pt1", 00:08:12.482 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:12.482 "is_configured": true, 00:08:12.482 "data_offset": 2048, 00:08:12.482 "data_size": 63488 00:08:12.482 }, 00:08:12.482 { 00:08:12.482 "name": null, 00:08:12.482 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:12.482 "is_configured": false, 00:08:12.482 "data_offset": 2048, 00:08:12.482 "data_size": 63488 00:08:12.482 } 00:08:12.482 ] 00:08:12.482 }' 00:08:12.482 12:25:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.482 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.742 [2024-09-30 12:25:24.505864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:12.742 [2024-09-30 12:25:24.506014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.742 [2024-09-30 12:25:24.506060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:12.742 [2024-09-30 12:25:24.506098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.742 [2024-09-30 12:25:24.506628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.742 [2024-09-30 12:25:24.506711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:12.742 [2024-09-30 12:25:24.506853] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:12.742 [2024-09-30 12:25:24.506920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:12.742 [2024-09-30 12:25:24.507069] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:12.742 [2024-09-30 12:25:24.507115] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:12.742 [2024-09-30 12:25:24.507397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:12.742 [2024-09-30 12:25:24.507616] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:12.742 [2024-09-30 12:25:24.507666] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:12.742 [2024-09-30 12:25:24.507873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.742 pt2 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.742 "name": "raid_bdev1", 00:08:12.742 "uuid": "648eb9c4-dfe8-4a7a-9b3e-8f1c2030f4da", 00:08:12.742 "strip_size_kb": 64, 00:08:12.742 "state": "online", 00:08:12.742 "raid_level": "raid0", 00:08:12.742 "superblock": true, 00:08:12.742 "num_base_bdevs": 2, 00:08:12.742 "num_base_bdevs_discovered": 2, 00:08:12.742 "num_base_bdevs_operational": 2, 00:08:12.742 "base_bdevs_list": [ 00:08:12.742 { 00:08:12.742 "name": "pt1", 00:08:12.742 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:12.742 "is_configured": true, 00:08:12.742 "data_offset": 2048, 00:08:12.742 "data_size": 63488 00:08:12.742 }, 00:08:12.742 { 00:08:12.742 "name": "pt2", 00:08:12.742 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:12.742 "is_configured": true, 00:08:12.742 "data_offset": 2048, 00:08:12.742 "data_size": 63488 00:08:12.742 } 00:08:12.742 ] 00:08:12.742 }' 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.742 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.312 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:13.312 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:13.312 
12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:13.312 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:13.312 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.312 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:13.312 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:13.312 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.312 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.312 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:13.312 [2024-09-30 12:25:24.953296] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.312 12:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.312 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:13.312 "name": "raid_bdev1", 00:08:13.312 "aliases": [ 00:08:13.312 "648eb9c4-dfe8-4a7a-9b3e-8f1c2030f4da" 00:08:13.312 ], 00:08:13.312 "product_name": "Raid Volume", 00:08:13.312 "block_size": 512, 00:08:13.312 "num_blocks": 126976, 00:08:13.312 "uuid": "648eb9c4-dfe8-4a7a-9b3e-8f1c2030f4da", 00:08:13.312 "assigned_rate_limits": { 00:08:13.312 "rw_ios_per_sec": 0, 00:08:13.312 "rw_mbytes_per_sec": 0, 00:08:13.312 "r_mbytes_per_sec": 0, 00:08:13.312 "w_mbytes_per_sec": 0 00:08:13.312 }, 00:08:13.312 "claimed": false, 00:08:13.312 "zoned": false, 00:08:13.312 "supported_io_types": { 00:08:13.312 "read": true, 00:08:13.312 "write": true, 00:08:13.312 "unmap": true, 00:08:13.312 "flush": true, 00:08:13.312 "reset": true, 00:08:13.312 "nvme_admin": false, 00:08:13.312 "nvme_io": false, 00:08:13.312 "nvme_io_md": false, 00:08:13.312 
"write_zeroes": true, 00:08:13.312 "zcopy": false, 00:08:13.312 "get_zone_info": false, 00:08:13.312 "zone_management": false, 00:08:13.312 "zone_append": false, 00:08:13.313 "compare": false, 00:08:13.313 "compare_and_write": false, 00:08:13.313 "abort": false, 00:08:13.313 "seek_hole": false, 00:08:13.313 "seek_data": false, 00:08:13.313 "copy": false, 00:08:13.313 "nvme_iov_md": false 00:08:13.313 }, 00:08:13.313 "memory_domains": [ 00:08:13.313 { 00:08:13.313 "dma_device_id": "system", 00:08:13.313 "dma_device_type": 1 00:08:13.313 }, 00:08:13.313 { 00:08:13.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.313 "dma_device_type": 2 00:08:13.313 }, 00:08:13.313 { 00:08:13.313 "dma_device_id": "system", 00:08:13.313 "dma_device_type": 1 00:08:13.313 }, 00:08:13.313 { 00:08:13.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.313 "dma_device_type": 2 00:08:13.313 } 00:08:13.313 ], 00:08:13.313 "driver_specific": { 00:08:13.313 "raid": { 00:08:13.313 "uuid": "648eb9c4-dfe8-4a7a-9b3e-8f1c2030f4da", 00:08:13.313 "strip_size_kb": 64, 00:08:13.313 "state": "online", 00:08:13.313 "raid_level": "raid0", 00:08:13.313 "superblock": true, 00:08:13.313 "num_base_bdevs": 2, 00:08:13.313 "num_base_bdevs_discovered": 2, 00:08:13.313 "num_base_bdevs_operational": 2, 00:08:13.313 "base_bdevs_list": [ 00:08:13.313 { 00:08:13.313 "name": "pt1", 00:08:13.313 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.313 "is_configured": true, 00:08:13.313 "data_offset": 2048, 00:08:13.313 "data_size": 63488 00:08:13.313 }, 00:08:13.313 { 00:08:13.313 "name": "pt2", 00:08:13.313 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.313 "is_configured": true, 00:08:13.313 "data_offset": 2048, 00:08:13.313 "data_size": 63488 00:08:13.313 } 00:08:13.313 ] 00:08:13.313 } 00:08:13.313 } 00:08:13.313 }' 00:08:13.313 12:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:13.313 pt2' 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.313 12:25:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:13.313 [2024-09-30 12:25:25.160939] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 648eb9c4-dfe8-4a7a-9b3e-8f1c2030f4da '!=' 648eb9c4-dfe8-4a7a-9b3e-8f1c2030f4da ']' 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61101 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 61101 ']' 00:08:13.313 12:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 61101 00:08:13.573 12:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:13.573 12:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.573 12:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61101 00:08:13.573 killing process with pid 61101 
00:08:13.573 12:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:13.573 12:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:13.573 12:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61101' 00:08:13.573 12:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 61101 00:08:13.573 [2024-09-30 12:25:25.246766] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:13.573 [2024-09-30 12:25:25.246870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.573 [2024-09-30 12:25:25.246919] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.573 [2024-09-30 12:25:25.246934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:13.573 12:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 61101 00:08:13.573 [2024-09-30 12:25:25.439943] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:14.954 ************************************ 00:08:14.954 END TEST raid_superblock_test 00:08:14.954 ************************************ 00:08:14.954 12:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:14.954 00:08:14.954 real 0m4.532s 00:08:14.954 user 0m6.258s 00:08:14.954 sys 0m0.790s 00:08:14.954 12:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.954 12:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.954 12:25:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:14.954 12:25:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:14.954 12:25:26 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.954 12:25:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:14.954 ************************************ 00:08:14.954 START TEST raid_read_error_test 00:08:14.954 ************************************ 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:14.954 12:25:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9dH0Tf7vvb 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61312 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61312 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 61312 ']' 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:14.954 12:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.954 [2024-09-30 12:25:26.791268] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:14.954 [2024-09-30 12:25:26.791375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61312 ] 00:08:15.213 [2024-09-30 12:25:26.935496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.472 [2024-09-30 12:25:27.132707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.472 [2024-09-30 12:25:27.325625] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.472 [2024-09-30 12:25:27.325671] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.732 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:15.732 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:15.732 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:15.732 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:15.732 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.732 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.992 BaseBdev1_malloc 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.992 true 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.992 [2024-09-30 12:25:27.680690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:15.992 [2024-09-30 12:25:27.680843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.992 [2024-09-30 12:25:27.680888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:15.992 [2024-09-30 12:25:27.680928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.992 [2024-09-30 12:25:27.683043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.992 [2024-09-30 12:25:27.683144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:15.992 BaseBdev1 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:15.992 BaseBdev2_malloc 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.992 true 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.992 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.992 [2024-09-30 12:25:27.755619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:15.992 [2024-09-30 12:25:27.755725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.992 [2024-09-30 12:25:27.755777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:15.992 [2024-09-30 12:25:27.755836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.992 [2024-09-30 12:25:27.757860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.992 [2024-09-30 12:25:27.757956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:15.992 BaseBdev2 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:15.993 12:25:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.993 [2024-09-30 12:25:27.767663] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.993 [2024-09-30 12:25:27.769495] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.993 [2024-09-30 12:25:27.769776] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:15.993 [2024-09-30 12:25:27.769834] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:15.993 [2024-09-30 12:25:27.770097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:15.993 [2024-09-30 12:25:27.770308] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:15.993 [2024-09-30 12:25:27.770357] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:15.993 [2024-09-30 12:25:27.770565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.993 "name": "raid_bdev1", 00:08:15.993 "uuid": "375c8c10-bad1-4029-b68b-493f9c262341", 00:08:15.993 "strip_size_kb": 64, 00:08:15.993 "state": "online", 00:08:15.993 "raid_level": "raid0", 00:08:15.993 "superblock": true, 00:08:15.993 "num_base_bdevs": 2, 00:08:15.993 "num_base_bdevs_discovered": 2, 00:08:15.993 "num_base_bdevs_operational": 2, 00:08:15.993 "base_bdevs_list": [ 00:08:15.993 { 00:08:15.993 "name": "BaseBdev1", 00:08:15.993 "uuid": "85a6a4bb-a93e-57d0-bf67-ee6ffd9cd9c6", 00:08:15.993 "is_configured": true, 00:08:15.993 "data_offset": 2048, 00:08:15.993 "data_size": 63488 00:08:15.993 }, 00:08:15.993 { 00:08:15.993 "name": "BaseBdev2", 00:08:15.993 "uuid": "0ca872bd-e8dc-5ea5-b9ad-d02527f2c669", 00:08:15.993 "is_configured": true, 00:08:15.993 "data_offset": 2048, 00:08:15.993 "data_size": 63488 00:08:15.993 } 00:08:15.993 ] 00:08:15.993 }' 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.993 12:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.562 12:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:16.562 12:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:16.562 [2024-09-30 12:25:28.336129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:17.500 12:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:17.500 12:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.500 12:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.500 12:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.500 12:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:17.500 12:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:17.501 12:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:17.501 12:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:17.501 12:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:17.501 12:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.501 12:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.501 12:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.501 12:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:17.501 12:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.501 12:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.501 12:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.501 12:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.501 12:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.501 12:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:17.501 12:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.501 12:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.501 12:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.501 12:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.501 "name": "raid_bdev1", 00:08:17.501 "uuid": "375c8c10-bad1-4029-b68b-493f9c262341", 00:08:17.501 "strip_size_kb": 64, 00:08:17.501 "state": "online", 00:08:17.501 "raid_level": "raid0", 00:08:17.501 "superblock": true, 00:08:17.501 "num_base_bdevs": 2, 00:08:17.501 "num_base_bdevs_discovered": 2, 00:08:17.501 "num_base_bdevs_operational": 2, 00:08:17.501 "base_bdevs_list": [ 00:08:17.501 { 00:08:17.501 "name": "BaseBdev1", 00:08:17.501 "uuid": "85a6a4bb-a93e-57d0-bf67-ee6ffd9cd9c6", 00:08:17.501 "is_configured": true, 00:08:17.501 "data_offset": 2048, 00:08:17.501 "data_size": 63488 00:08:17.501 }, 00:08:17.501 { 00:08:17.501 "name": "BaseBdev2", 00:08:17.501 "uuid": "0ca872bd-e8dc-5ea5-b9ad-d02527f2c669", 00:08:17.501 "is_configured": true, 00:08:17.501 "data_offset": 2048, 00:08:17.501 "data_size": 63488 00:08:17.501 } 00:08:17.501 ] 00:08:17.501 }' 00:08:17.501 12:25:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.501 12:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.070 12:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:18.070 12:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.070 12:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.070 [2024-09-30 12:25:29.738284] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:18.070 [2024-09-30 12:25:29.738385] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.070 [2024-09-30 12:25:29.741020] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.070 [2024-09-30 12:25:29.741112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.070 [2024-09-30 12:25:29.741167] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:18.070 [2024-09-30 12:25:29.741215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:18.070 { 00:08:18.070 "results": [ 00:08:18.070 { 00:08:18.070 "job": "raid_bdev1", 00:08:18.070 "core_mask": "0x1", 00:08:18.070 "workload": "randrw", 00:08:18.070 "percentage": 50, 00:08:18.070 "status": "finished", 00:08:18.070 "queue_depth": 1, 00:08:18.070 "io_size": 131072, 00:08:18.070 "runtime": 1.403139, 00:08:18.070 "iops": 16677.60642388245, 00:08:18.070 "mibps": 2084.7008029853064, 00:08:18.070 "io_failed": 1, 00:08:18.070 "io_timeout": 0, 00:08:18.070 "avg_latency_us": 83.16820217284456, 00:08:18.070 "min_latency_us": 25.4882096069869, 00:08:18.070 "max_latency_us": 1395.1441048034935 00:08:18.070 } 00:08:18.070 ], 00:08:18.070 "core_count": 1 00:08:18.070 } 00:08:18.070 12:25:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.070 12:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61312 00:08:18.070 12:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 61312 ']' 00:08:18.070 12:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 61312 00:08:18.070 12:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:18.070 12:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:18.070 12:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61312 00:08:18.070 killing process with pid 61312 00:08:18.070 12:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:18.070 12:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:18.070 12:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61312' 00:08:18.070 12:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 61312 00:08:18.070 [2024-09-30 12:25:29.785535] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:18.070 12:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 61312 00:08:18.070 [2024-09-30 12:25:29.924874] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:19.452 12:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9dH0Tf7vvb 00:08:19.452 12:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:19.452 12:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:19.452 12:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:19.452 12:25:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:19.452 12:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:19.452 12:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:19.452 12:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:19.452 00:08:19.452 real 0m4.493s 00:08:19.452 user 0m5.380s 00:08:19.452 sys 0m0.554s 00:08:19.452 ************************************ 00:08:19.452 END TEST raid_read_error_test 00:08:19.452 ************************************ 00:08:19.452 12:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.452 12:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.452 12:25:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:19.452 12:25:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:19.452 12:25:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.452 12:25:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:19.452 ************************************ 00:08:19.452 START TEST raid_write_error_test 00:08:19.452 ************************************ 00:08:19.452 12:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:08:19.452 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:19.452 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:19.452 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:19.452 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:19.452 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.452 12:25:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:19.452 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:19.452 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.452 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:19.452 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:19.452 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.452 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:19.452 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:19.452 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:19.452 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:19.452 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:19.452 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:19.452 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:19.453 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:19.453 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:19.453 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:19.453 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:19.453 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.z8728qjtnN 00:08:19.453 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61459 00:08:19.453 12:25:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:19.453 12:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61459 00:08:19.453 12:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 61459 ']' 00:08:19.453 12:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.453 12:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:19.453 12:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.453 12:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:19.453 12:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.713 [2024-09-30 12:25:31.354897] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:19.713 [2024-09-30 12:25:31.355097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61459 ] 00:08:19.713 [2024-09-30 12:25:31.516358] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.972 [2024-09-30 12:25:31.708693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.232 [2024-09-30 12:25:31.880720] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.232 [2024-09-30 12:25:31.880774] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.492 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.492 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:20.492 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:20.492 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:20.492 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.492 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.492 BaseBdev1_malloc 00:08:20.492 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.492 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:20.492 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.492 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.492 true 00:08:20.492 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:20.492 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:20.492 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.492 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.492 [2024-09-30 12:25:32.228967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:20.492 [2024-09-30 12:25:32.229106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.492 [2024-09-30 12:25:32.229146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:20.492 [2024-09-30 12:25:32.229182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.492 [2024-09-30 12:25:32.231292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.492 [2024-09-30 12:25:32.231395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:20.492 BaseBdev1 00:08:20.492 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.493 BaseBdev2_malloc 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:20.493 12:25:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.493 true 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.493 [2024-09-30 12:25:32.305614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:20.493 [2024-09-30 12:25:32.305725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.493 [2024-09-30 12:25:32.305780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:20.493 [2024-09-30 12:25:32.305842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.493 [2024-09-30 12:25:32.308134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.493 [2024-09-30 12:25:32.308226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:20.493 BaseBdev2 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.493 [2024-09-30 12:25:32.317671] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:20.493 [2024-09-30 12:25:32.319666] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:20.493 [2024-09-30 12:25:32.319925] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:20.493 [2024-09-30 12:25:32.319986] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:20.493 [2024-09-30 12:25:32.320251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:20.493 [2024-09-30 12:25:32.320468] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:20.493 [2024-09-30 12:25:32.320519] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:20.493 [2024-09-30 12:25:32.320723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.493 "name": "raid_bdev1", 00:08:20.493 "uuid": "42e1d44f-5468-4cf2-9fac-0a3162ed7c43", 00:08:20.493 "strip_size_kb": 64, 00:08:20.493 "state": "online", 00:08:20.493 "raid_level": "raid0", 00:08:20.493 "superblock": true, 00:08:20.493 "num_base_bdevs": 2, 00:08:20.493 "num_base_bdevs_discovered": 2, 00:08:20.493 "num_base_bdevs_operational": 2, 00:08:20.493 "base_bdevs_list": [ 00:08:20.493 { 00:08:20.493 "name": "BaseBdev1", 00:08:20.493 "uuid": "566939c6-df7e-5687-b0de-dcb9e16095a8", 00:08:20.493 "is_configured": true, 00:08:20.493 "data_offset": 2048, 00:08:20.493 "data_size": 63488 00:08:20.493 }, 00:08:20.493 { 00:08:20.493 "name": "BaseBdev2", 00:08:20.493 "uuid": "b7d2573b-8815-5b9a-a084-deb79b402502", 00:08:20.493 "is_configured": true, 00:08:20.493 "data_offset": 2048, 00:08:20.493 "data_size": 63488 00:08:20.493 } 00:08:20.493 ] 00:08:20.493 }' 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.493 12:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.062 12:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:21.062 12:25:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:21.062 [2024-09-30 12:25:32.830113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:21.999 12:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:21.999 12:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.999 12:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.999 12:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.999 12:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:21.999 12:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:21.999 12:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:21.999 12:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:21.999 12:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.999 12:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.999 12:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.999 12:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.999 12:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.999 12:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.999 12:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.000 12:25:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.000 12:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.000 12:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.000 12:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.000 12:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.000 12:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.000 12:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.000 12:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.000 "name": "raid_bdev1", 00:08:22.000 "uuid": "42e1d44f-5468-4cf2-9fac-0a3162ed7c43", 00:08:22.000 "strip_size_kb": 64, 00:08:22.000 "state": "online", 00:08:22.000 "raid_level": "raid0", 00:08:22.000 "superblock": true, 00:08:22.000 "num_base_bdevs": 2, 00:08:22.000 "num_base_bdevs_discovered": 2, 00:08:22.000 "num_base_bdevs_operational": 2, 00:08:22.000 "base_bdevs_list": [ 00:08:22.000 { 00:08:22.000 "name": "BaseBdev1", 00:08:22.000 "uuid": "566939c6-df7e-5687-b0de-dcb9e16095a8", 00:08:22.000 "is_configured": true, 00:08:22.000 "data_offset": 2048, 00:08:22.000 "data_size": 63488 00:08:22.000 }, 00:08:22.000 { 00:08:22.000 "name": "BaseBdev2", 00:08:22.000 "uuid": "b7d2573b-8815-5b9a-a084-deb79b402502", 00:08:22.000 "is_configured": true, 00:08:22.000 "data_offset": 2048, 00:08:22.000 "data_size": 63488 00:08:22.000 } 00:08:22.000 ] 00:08:22.000 }' 00:08:22.000 12:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.000 12:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.570 12:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:08:22.570 12:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.570 12:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.570 [2024-09-30 12:25:34.220239] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:22.570 [2024-09-30 12:25:34.220338] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.570 [2024-09-30 12:25:34.222923] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.570 [2024-09-30 12:25:34.222971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.570 [2024-09-30 12:25:34.223005] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.570 [2024-09-30 12:25:34.223018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:22.570 { 00:08:22.570 "results": [ 00:08:22.570 { 00:08:22.570 "job": "raid_bdev1", 00:08:22.570 "core_mask": "0x1", 00:08:22.570 "workload": "randrw", 00:08:22.570 "percentage": 50, 00:08:22.570 "status": "finished", 00:08:22.570 "queue_depth": 1, 00:08:22.570 "io_size": 131072, 00:08:22.570 "runtime": 1.391092, 00:08:22.570 "iops": 16864.448936518937, 00:08:22.570 "mibps": 2108.056117064867, 00:08:22.570 "io_failed": 1, 00:08:22.570 "io_timeout": 0, 00:08:22.570 "avg_latency_us": 82.18010504844145, 00:08:22.570 "min_latency_us": 25.152838427947597, 00:08:22.570 "max_latency_us": 1330.7528384279476 00:08:22.570 } 00:08:22.570 ], 00:08:22.570 "core_count": 1 00:08:22.570 } 00:08:22.570 12:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.570 12:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61459 00:08:22.570 12:25:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 61459 ']' 00:08:22.570 12:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 61459 00:08:22.570 12:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:22.570 12:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:22.570 12:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61459 00:08:22.570 killing process with pid 61459 00:08:22.570 12:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:22.570 12:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:22.570 12:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61459' 00:08:22.570 12:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 61459 00:08:22.570 [2024-09-30 12:25:34.262141] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:22.570 12:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 61459 00:08:22.570 [2024-09-30 12:25:34.390761] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:23.953 12:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.z8728qjtnN 00:08:23.953 12:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:23.953 12:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:23.953 12:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:23.953 12:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:23.953 12:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:23.953 12:25:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:08:23.953 12:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:23.953 00:08:23.953 real 0m4.404s 00:08:23.953 user 0m5.218s 00:08:23.953 sys 0m0.538s 00:08:23.953 12:25:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.953 ************************************ 00:08:23.953 END TEST raid_write_error_test 00:08:23.953 ************************************ 00:08:23.953 12:25:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.953 12:25:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:23.953 12:25:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:23.953 12:25:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:23.953 12:25:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.953 12:25:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:23.953 ************************************ 00:08:23.953 START TEST raid_state_function_test 00:08:23.953 ************************************ 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61597 00:08:23.953 12:25:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61597' 00:08:23.953 Process raid pid: 61597 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61597 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 61597 ']' 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.953 12:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.953 [2024-09-30 12:25:35.829117] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:23.953 [2024-09-30 12:25:35.829317] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.213 [2024-09-30 12:25:35.992248] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.471 [2024-09-30 12:25:36.185465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.730 [2024-09-30 12:25:36.381852] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.730 [2024-09-30 12:25:36.381945] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.989 [2024-09-30 12:25:36.648257] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:24.989 [2024-09-30 12:25:36.648319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:24.989 [2024-09-30 12:25:36.648330] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:24.989 [2024-09-30 12:25:36.648343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.989 12:25:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.989 "name": "Existed_Raid", 00:08:24.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.989 "strip_size_kb": 64, 00:08:24.989 "state": "configuring", 00:08:24.989 
"raid_level": "concat", 00:08:24.989 "superblock": false, 00:08:24.989 "num_base_bdevs": 2, 00:08:24.989 "num_base_bdevs_discovered": 0, 00:08:24.989 "num_base_bdevs_operational": 2, 00:08:24.989 "base_bdevs_list": [ 00:08:24.989 { 00:08:24.989 "name": "BaseBdev1", 00:08:24.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.989 "is_configured": false, 00:08:24.989 "data_offset": 0, 00:08:24.989 "data_size": 0 00:08:24.989 }, 00:08:24.989 { 00:08:24.989 "name": "BaseBdev2", 00:08:24.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.989 "is_configured": false, 00:08:24.989 "data_offset": 0, 00:08:24.989 "data_size": 0 00:08:24.989 } 00:08:24.989 ] 00:08:24.989 }' 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.989 12:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.290 [2024-09-30 12:25:37.067568] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:25.290 [2024-09-30 12:25:37.067656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:25.290 [2024-09-30 12:25:37.079592] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:25.290 [2024-09-30 12:25:37.079698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:25.290 [2024-09-30 12:25:37.079734] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:25.290 [2024-09-30 12:25:37.079796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.290 [2024-09-30 12:25:37.159817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.290 BaseBdev1 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.290 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.290 [ 00:08:25.290 { 00:08:25.290 "name": "BaseBdev1", 00:08:25.290 "aliases": [ 00:08:25.290 "15eb6a45-42dc-42b7-b47a-66f621412957" 00:08:25.290 ], 00:08:25.549 "product_name": "Malloc disk", 00:08:25.549 "block_size": 512, 00:08:25.549 "num_blocks": 65536, 00:08:25.549 "uuid": "15eb6a45-42dc-42b7-b47a-66f621412957", 00:08:25.549 "assigned_rate_limits": { 00:08:25.549 "rw_ios_per_sec": 0, 00:08:25.549 "rw_mbytes_per_sec": 0, 00:08:25.549 "r_mbytes_per_sec": 0, 00:08:25.549 "w_mbytes_per_sec": 0 00:08:25.549 }, 00:08:25.549 "claimed": true, 00:08:25.549 "claim_type": "exclusive_write", 00:08:25.549 "zoned": false, 00:08:25.549 "supported_io_types": { 00:08:25.549 "read": true, 00:08:25.549 "write": true, 00:08:25.549 "unmap": true, 00:08:25.549 "flush": true, 00:08:25.549 "reset": true, 00:08:25.549 "nvme_admin": false, 00:08:25.549 "nvme_io": false, 00:08:25.549 "nvme_io_md": false, 00:08:25.549 "write_zeroes": true, 00:08:25.549 "zcopy": true, 00:08:25.549 "get_zone_info": false, 00:08:25.549 "zone_management": false, 00:08:25.549 "zone_append": false, 00:08:25.549 "compare": false, 00:08:25.549 "compare_and_write": false, 00:08:25.549 "abort": true, 00:08:25.549 "seek_hole": false, 00:08:25.549 "seek_data": false, 00:08:25.549 "copy": true, 00:08:25.549 "nvme_iov_md": 
false 00:08:25.549 }, 00:08:25.549 "memory_domains": [ 00:08:25.549 { 00:08:25.549 "dma_device_id": "system", 00:08:25.549 "dma_device_type": 1 00:08:25.549 }, 00:08:25.549 { 00:08:25.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.549 "dma_device_type": 2 00:08:25.549 } 00:08:25.549 ], 00:08:25.549 "driver_specific": {} 00:08:25.549 } 00:08:25.549 ] 00:08:25.549 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.549 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:25.549 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:25.549 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.549 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.549 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:25.549 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.549 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:25.549 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.549 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.549 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.549 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.549 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.549 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.549 
12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.549 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.549 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.549 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.549 "name": "Existed_Raid", 00:08:25.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.549 "strip_size_kb": 64, 00:08:25.549 "state": "configuring", 00:08:25.549 "raid_level": "concat", 00:08:25.549 "superblock": false, 00:08:25.549 "num_base_bdevs": 2, 00:08:25.549 "num_base_bdevs_discovered": 1, 00:08:25.549 "num_base_bdevs_operational": 2, 00:08:25.549 "base_bdevs_list": [ 00:08:25.549 { 00:08:25.549 "name": "BaseBdev1", 00:08:25.549 "uuid": "15eb6a45-42dc-42b7-b47a-66f621412957", 00:08:25.549 "is_configured": true, 00:08:25.549 "data_offset": 0, 00:08:25.549 "data_size": 65536 00:08:25.549 }, 00:08:25.549 { 00:08:25.549 "name": "BaseBdev2", 00:08:25.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.549 "is_configured": false, 00:08:25.549 "data_offset": 0, 00:08:25.549 "data_size": 0 00:08:25.549 } 00:08:25.549 ] 00:08:25.549 }' 00:08:25.549 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.549 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.809 [2024-09-30 12:25:37.635015] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:25.809 [2024-09-30 12:25:37.635068] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.809 [2024-09-30 12:25:37.647031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.809 [2024-09-30 12:25:37.648988] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:25.809 [2024-09-30 12:25:37.649079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.809 "name": "Existed_Raid", 00:08:25.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.809 "strip_size_kb": 64, 00:08:25.809 "state": "configuring", 00:08:25.809 "raid_level": "concat", 00:08:25.809 "superblock": false, 00:08:25.809 "num_base_bdevs": 2, 00:08:25.809 "num_base_bdevs_discovered": 1, 00:08:25.809 "num_base_bdevs_operational": 2, 00:08:25.809 "base_bdevs_list": [ 00:08:25.809 { 00:08:25.809 "name": "BaseBdev1", 00:08:25.809 "uuid": "15eb6a45-42dc-42b7-b47a-66f621412957", 00:08:25.809 "is_configured": true, 00:08:25.809 "data_offset": 0, 00:08:25.809 "data_size": 65536 00:08:25.809 }, 00:08:25.809 { 00:08:25.809 "name": "BaseBdev2", 00:08:25.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.809 "is_configured": false, 00:08:25.809 "data_offset": 0, 00:08:25.809 "data_size": 0 00:08:25.809 } 
00:08:25.809 ] 00:08:25.809 }' 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.809 12:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.397 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:26.397 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.397 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.397 [2024-09-30 12:25:38.106949] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.397 [2024-09-30 12:25:38.107099] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:26.398 [2024-09-30 12:25:38.107130] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:26.398 [2024-09-30 12:25:38.107454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:26.398 [2024-09-30 12:25:38.107677] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:26.398 [2024-09-30 12:25:38.107730] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:26.398 [2024-09-30 12:25:38.108084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.398 BaseBdev2 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:26.398 12:25:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.398 [ 00:08:26.398 { 00:08:26.398 "name": "BaseBdev2", 00:08:26.398 "aliases": [ 00:08:26.398 "e1731eea-7b04-4ece-80a4-21bb280fb888" 00:08:26.398 ], 00:08:26.398 "product_name": "Malloc disk", 00:08:26.398 "block_size": 512, 00:08:26.398 "num_blocks": 65536, 00:08:26.398 "uuid": "e1731eea-7b04-4ece-80a4-21bb280fb888", 00:08:26.398 "assigned_rate_limits": { 00:08:26.398 "rw_ios_per_sec": 0, 00:08:26.398 "rw_mbytes_per_sec": 0, 00:08:26.398 "r_mbytes_per_sec": 0, 00:08:26.398 "w_mbytes_per_sec": 0 00:08:26.398 }, 00:08:26.398 "claimed": true, 00:08:26.398 "claim_type": "exclusive_write", 00:08:26.398 "zoned": false, 00:08:26.398 "supported_io_types": { 00:08:26.398 "read": true, 00:08:26.398 "write": true, 00:08:26.398 "unmap": true, 00:08:26.398 "flush": true, 00:08:26.398 "reset": true, 00:08:26.398 "nvme_admin": false, 00:08:26.398 "nvme_io": false, 00:08:26.398 "nvme_io_md": 
false, 00:08:26.398 "write_zeroes": true, 00:08:26.398 "zcopy": true, 00:08:26.398 "get_zone_info": false, 00:08:26.398 "zone_management": false, 00:08:26.398 "zone_append": false, 00:08:26.398 "compare": false, 00:08:26.398 "compare_and_write": false, 00:08:26.398 "abort": true, 00:08:26.398 "seek_hole": false, 00:08:26.398 "seek_data": false, 00:08:26.398 "copy": true, 00:08:26.398 "nvme_iov_md": false 00:08:26.398 }, 00:08:26.398 "memory_domains": [ 00:08:26.398 { 00:08:26.398 "dma_device_id": "system", 00:08:26.398 "dma_device_type": 1 00:08:26.398 }, 00:08:26.398 { 00:08:26.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.398 "dma_device_type": 2 00:08:26.398 } 00:08:26.398 ], 00:08:26.398 "driver_specific": {} 00:08:26.398 } 00:08:26.398 ] 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.398 "name": "Existed_Raid", 00:08:26.398 "uuid": "cc5da04a-ca1b-4f4c-8815-cefbd606a1bb", 00:08:26.398 "strip_size_kb": 64, 00:08:26.398 "state": "online", 00:08:26.398 "raid_level": "concat", 00:08:26.398 "superblock": false, 00:08:26.398 "num_base_bdevs": 2, 00:08:26.398 "num_base_bdevs_discovered": 2, 00:08:26.398 "num_base_bdevs_operational": 2, 00:08:26.398 "base_bdevs_list": [ 00:08:26.398 { 00:08:26.398 "name": "BaseBdev1", 00:08:26.398 "uuid": "15eb6a45-42dc-42b7-b47a-66f621412957", 00:08:26.398 "is_configured": true, 00:08:26.398 "data_offset": 0, 00:08:26.398 "data_size": 65536 00:08:26.398 }, 00:08:26.398 { 00:08:26.398 "name": "BaseBdev2", 00:08:26.398 "uuid": "e1731eea-7b04-4ece-80a4-21bb280fb888", 00:08:26.398 "is_configured": true, 00:08:26.398 "data_offset": 0, 00:08:26.398 "data_size": 65536 00:08:26.398 } 00:08:26.398 ] 00:08:26.398 }' 00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
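The `verify_raid_bdev_state` calls in the log above pull one entry out of the `bdev_raid_get_bdevs all` JSON with `jq` and compare its fields against expected values. A minimal standalone sketch of that check, with field values taken from the "online" state dump above (the `check_field` helper is hypothetical and uses plain `grep` instead of `jq` so it needs nothing beyond bash):

```shell
# Sample of the JSON that verify_raid_bdev_state inspects; the field
# values are copied from the log above.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "concat",
  "strip_size_kb": 64,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}'

# The real test extracts fields with jq; a fixed-string grep stands in
# here so the sketch depends only on bash and grep.
check_field() {
    local field=$1 expected=$2
    echo "$raid_bdev_info" | grep -qF "\"$field\": $expected"
}

check_field state '"online"' \
    && check_field raid_level '"concat"' \
    && check_field strip_size_kb 64 \
    && check_field num_base_bdevs_operational 2 \
    && echo "Existed_Raid state verified"
```

In the real script a mismatch on any of these fields fails the test case, which is why the log re-dumps the full `raid_bdev_info` blob after every state transition.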
00:08:26.398 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.997 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:26.997 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:26.997 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:26.997 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:26.997 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:26.997 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:26.997 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:26.997 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.997 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.997 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:26.997 [2024-09-30 12:25:38.590399] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.997 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.997 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:26.997 "name": "Existed_Raid", 00:08:26.997 "aliases": [ 00:08:26.997 "cc5da04a-ca1b-4f4c-8815-cefbd606a1bb" 00:08:26.997 ], 00:08:26.997 "product_name": "Raid Volume", 00:08:26.997 "block_size": 512, 00:08:26.997 "num_blocks": 131072, 00:08:26.997 "uuid": "cc5da04a-ca1b-4f4c-8815-cefbd606a1bb", 00:08:26.997 "assigned_rate_limits": { 00:08:26.997 "rw_ios_per_sec": 0, 00:08:26.997 "rw_mbytes_per_sec": 0, 00:08:26.997 "r_mbytes_per_sec": 
0, 00:08:26.997 "w_mbytes_per_sec": 0 00:08:26.997 }, 00:08:26.997 "claimed": false, 00:08:26.997 "zoned": false, 00:08:26.997 "supported_io_types": { 00:08:26.997 "read": true, 00:08:26.997 "write": true, 00:08:26.997 "unmap": true, 00:08:26.997 "flush": true, 00:08:26.997 "reset": true, 00:08:26.997 "nvme_admin": false, 00:08:26.997 "nvme_io": false, 00:08:26.997 "nvme_io_md": false, 00:08:26.997 "write_zeroes": true, 00:08:26.997 "zcopy": false, 00:08:26.997 "get_zone_info": false, 00:08:26.997 "zone_management": false, 00:08:26.997 "zone_append": false, 00:08:26.997 "compare": false, 00:08:26.997 "compare_and_write": false, 00:08:26.997 "abort": false, 00:08:26.997 "seek_hole": false, 00:08:26.997 "seek_data": false, 00:08:26.997 "copy": false, 00:08:26.997 "nvme_iov_md": false 00:08:26.997 }, 00:08:26.997 "memory_domains": [ 00:08:26.997 { 00:08:26.997 "dma_device_id": "system", 00:08:26.997 "dma_device_type": 1 00:08:26.997 }, 00:08:26.997 { 00:08:26.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.997 "dma_device_type": 2 00:08:26.997 }, 00:08:26.997 { 00:08:26.997 "dma_device_id": "system", 00:08:26.997 "dma_device_type": 1 00:08:26.997 }, 00:08:26.997 { 00:08:26.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.997 "dma_device_type": 2 00:08:26.997 } 00:08:26.997 ], 00:08:26.997 "driver_specific": { 00:08:26.997 "raid": { 00:08:26.997 "uuid": "cc5da04a-ca1b-4f4c-8815-cefbd606a1bb", 00:08:26.997 "strip_size_kb": 64, 00:08:26.997 "state": "online", 00:08:26.997 "raid_level": "concat", 00:08:26.997 "superblock": false, 00:08:26.997 "num_base_bdevs": 2, 00:08:26.997 "num_base_bdevs_discovered": 2, 00:08:26.997 "num_base_bdevs_operational": 2, 00:08:26.997 "base_bdevs_list": [ 00:08:26.997 { 00:08:26.997 "name": "BaseBdev1", 00:08:26.997 "uuid": "15eb6a45-42dc-42b7-b47a-66f621412957", 00:08:26.997 "is_configured": true, 00:08:26.997 "data_offset": 0, 00:08:26.997 "data_size": 65536 00:08:26.997 }, 00:08:26.997 { 00:08:26.998 "name": "BaseBdev2", 
00:08:26.998 "uuid": "e1731eea-7b04-4ece-80a4-21bb280fb888", 00:08:26.998 "is_configured": true, 00:08:26.998 "data_offset": 0, 00:08:26.998 "data_size": 65536 00:08:26.998 } 00:08:26.998 ] 00:08:26.998 } 00:08:26.998 } 00:08:26.998 }' 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:26.998 BaseBdev2' 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
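The cryptic `[[ 512 == \5\1\2\ \ \ ]]` comparisons in the log come from `verify_raid_bdev_properties`: it joins `.block_size, .md_size, .md_interleave, .dif_type` into one string for the raid volume and for each base bdev, then requires the strings to match. A sketch of that comparison (values taken from the log; with a 512-byte block size and no metadata, `jq`'s `join(" ")` renders the three null fields as empty strings, giving `"512   "` with three trailing blanks):

```shell
# Joined "block_size md_size md_interleave dif_type" strings, as produced
# by the jq pipelines in the log above.
cmp_raid_bdev='512   '   # from the Raid Volume entry
cmp_base_bdev='512   '   # from BaseBdev1 / BaseBdev2

# The raid volume must report the same format as every base bdev.
if [ "$cmp_raid_bdev" = "$cmp_base_bdev" ]; then
    echo "base bdev matches raid volume format"
fi
```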
00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.998 [2024-09-30 12:25:38.777893] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:26.998 [2024-09-30 12:25:38.777971] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:26.998 [2024-09-30 12:25:38.778065] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.998 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.258 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.258 "name": "Existed_Raid", 00:08:27.258 "uuid": "cc5da04a-ca1b-4f4c-8815-cefbd606a1bb", 00:08:27.258 "strip_size_kb": 64, 00:08:27.258 
"state": "offline", 00:08:27.258 "raid_level": "concat", 00:08:27.258 "superblock": false, 00:08:27.258 "num_base_bdevs": 2, 00:08:27.258 "num_base_bdevs_discovered": 1, 00:08:27.258 "num_base_bdevs_operational": 1, 00:08:27.258 "base_bdevs_list": [ 00:08:27.258 { 00:08:27.258 "name": null, 00:08:27.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.258 "is_configured": false, 00:08:27.258 "data_offset": 0, 00:08:27.258 "data_size": 65536 00:08:27.258 }, 00:08:27.258 { 00:08:27.258 "name": "BaseBdev2", 00:08:27.258 "uuid": "e1731eea-7b04-4ece-80a4-21bb280fb888", 00:08:27.258 "is_configured": true, 00:08:27.258 "data_offset": 0, 00:08:27.258 "data_size": 65536 00:08:27.258 } 00:08:27.258 ] 00:08:27.258 }' 00:08:27.258 12:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.258 12:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.518 12:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:27.518 12:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:27.518 12:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:27.518 12:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.518 12:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.518 12:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.518 12:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.518 12:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:27.518 12:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:27.518 12:25:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:27.518 12:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.518 12:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.518 [2024-09-30 12:25:39.371898] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:27.518 [2024-09-30 12:25:39.372020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61597 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 61597 ']' 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 61597 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61597 00:08:27.778 killing process with pid 61597 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61597' 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 61597 00:08:27.778 [2024-09-30 12:25:39.556883] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:27.778 12:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 61597 00:08:27.778 [2024-09-30 12:25:39.573582] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:29.157 12:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:29.157 00:08:29.157 real 0m5.040s 00:08:29.157 user 0m7.177s 00:08:29.157 sys 0m0.798s 00:08:29.157 12:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.157 12:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.157 ************************************ 00:08:29.157 END TEST raid_state_function_test 00:08:29.157 ************************************ 00:08:29.157 12:25:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:29.157 12:25:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:08:29.157 12:25:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.157 12:25:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:29.157 ************************************ 00:08:29.157 START TEST raid_state_function_test_sb 00:08:29.158 ************************************ 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
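The `raid_state_function_test_sb` preamble above builds the `bdev_raid_create` argument list from `raid_level` and `superblock`: non-raid1 levels get a `-z <strip_size>` argument and `superblock=true` adds `-s`. A pure-bash sketch of that assembly, mirroring the `strip_size_create_arg`/`superblock_create_arg` logic in the log (`rpc_cmd` is only echoed here, not invoked):

```shell
raid_level=concat
superblock=true

# Mirrors bdev_raid.sh@215-223 in the log: raid1 has no strip size,
# every other level takes -z; a superblock test adds -s.
strip_size_create_arg=""
superblock_create_arg=""
if [ "$raid_level" != raid1 ]; then
    strip_size=64
    strip_size_create_arg="-z $strip_size"
fi
if [ "$superblock" = true ]; then
    superblock_create_arg="-s"
fi

echo "rpc_cmd bdev_raid_create $strip_size_create_arg $superblock_create_arg" \
     "-r $raid_level -b 'BaseBdev1 BaseBdev2' -n Existed_Raid"
```

Which reproduces the command visible in the log: `rpc_cmd bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid`.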
00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61850 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:29.158 Process raid pid: 61850 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61850' 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61850 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 61850 ']' 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.158 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.158 12:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.158 [2024-09-30 12:25:40.941487] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:29.158 [2024-09-30 12:25:40.941610] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.417 [2024-09-30 12:25:41.099018] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.417 [2024-09-30 12:25:41.301048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.677 [2024-09-30 12:25:41.486668] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.677 [2024-09-30 12:25:41.486709] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.937 [2024-09-30 12:25:41.750428] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:29.937 [2024-09-30 12:25:41.750540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:29.937 [2024-09-30 12:25:41.750574] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:29.937 [2024-09-30 12:25:41.750602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.937 "name": "Existed_Raid", 00:08:29.937 "uuid": "e17baf16-8a47-4f40-8b08-f7a2634baf34", 00:08:29.937 "strip_size_kb": 64, 00:08:29.937 "state": "configuring", 00:08:29.937 "raid_level": "concat", 00:08:29.937 "superblock": true, 00:08:29.937 "num_base_bdevs": 2, 00:08:29.937 "num_base_bdevs_discovered": 0, 00:08:29.937 "num_base_bdevs_operational": 2, 00:08:29.937 "base_bdevs_list": [ 00:08:29.937 { 00:08:29.937 "name": "BaseBdev1", 00:08:29.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.937 "is_configured": false, 00:08:29.937 "data_offset": 0, 00:08:29.937 "data_size": 0 00:08:29.937 }, 00:08:29.937 { 00:08:29.937 "name": "BaseBdev2", 00:08:29.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.937 "is_configured": false, 00:08:29.937 "data_offset": 0, 00:08:29.937 "data_size": 0 00:08:29.937 } 00:08:29.937 ] 00:08:29.937 }' 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.937 12:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.507 [2024-09-30 12:25:42.221506] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:08:30.507 [2024-09-30 12:25:42.221611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.507 [2024-09-30 12:25:42.241536] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:30.507 [2024-09-30 12:25:42.241631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:30.507 [2024-09-30 12:25:42.241672] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.507 [2024-09-30 12:25:42.241712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.507 [2024-09-30 12:25:42.316520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.507 BaseBdev1 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.507 [ 00:08:30.507 { 00:08:30.507 "name": "BaseBdev1", 00:08:30.507 "aliases": [ 00:08:30.507 "3b9a0a61-baa4-43e2-82df-94ec3d3960cb" 00:08:30.507 ], 00:08:30.507 "product_name": "Malloc disk", 00:08:30.507 "block_size": 512, 00:08:30.507 "num_blocks": 65536, 00:08:30.507 "uuid": "3b9a0a61-baa4-43e2-82df-94ec3d3960cb", 00:08:30.507 "assigned_rate_limits": { 00:08:30.507 "rw_ios_per_sec": 0, 00:08:30.507 "rw_mbytes_per_sec": 0, 00:08:30.507 "r_mbytes_per_sec": 0, 00:08:30.507 "w_mbytes_per_sec": 0 00:08:30.507 }, 00:08:30.507 "claimed": true, 
00:08:30.507 "claim_type": "exclusive_write", 00:08:30.507 "zoned": false, 00:08:30.507 "supported_io_types": { 00:08:30.507 "read": true, 00:08:30.507 "write": true, 00:08:30.507 "unmap": true, 00:08:30.507 "flush": true, 00:08:30.507 "reset": true, 00:08:30.507 "nvme_admin": false, 00:08:30.507 "nvme_io": false, 00:08:30.507 "nvme_io_md": false, 00:08:30.507 "write_zeroes": true, 00:08:30.507 "zcopy": true, 00:08:30.507 "get_zone_info": false, 00:08:30.507 "zone_management": false, 00:08:30.507 "zone_append": false, 00:08:30.507 "compare": false, 00:08:30.507 "compare_and_write": false, 00:08:30.507 "abort": true, 00:08:30.507 "seek_hole": false, 00:08:30.507 "seek_data": false, 00:08:30.507 "copy": true, 00:08:30.507 "nvme_iov_md": false 00:08:30.507 }, 00:08:30.507 "memory_domains": [ 00:08:30.507 { 00:08:30.507 "dma_device_id": "system", 00:08:30.507 "dma_device_type": 1 00:08:30.507 }, 00:08:30.507 { 00:08:30.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.507 "dma_device_type": 2 00:08:30.507 } 00:08:30.507 ], 00:08:30.507 "driver_specific": {} 00:08:30.507 } 00:08:30.507 ] 00:08:30.507 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.508 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:30.508 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:30.508 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.508 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.508 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.508 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.508 12:25:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:30.508 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.508 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.508 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.508 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.508 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.508 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.508 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.508 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.508 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.768 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.768 "name": "Existed_Raid", 00:08:30.768 "uuid": "0dd39555-2043-44f1-a0b5-64f09f543c27", 00:08:30.768 "strip_size_kb": 64, 00:08:30.768 "state": "configuring", 00:08:30.768 "raid_level": "concat", 00:08:30.768 "superblock": true, 00:08:30.768 "num_base_bdevs": 2, 00:08:30.768 "num_base_bdevs_discovered": 1, 00:08:30.768 "num_base_bdevs_operational": 2, 00:08:30.768 "base_bdevs_list": [ 00:08:30.768 { 00:08:30.768 "name": "BaseBdev1", 00:08:30.768 "uuid": "3b9a0a61-baa4-43e2-82df-94ec3d3960cb", 00:08:30.768 "is_configured": true, 00:08:30.768 "data_offset": 2048, 00:08:30.768 "data_size": 63488 00:08:30.768 }, 00:08:30.768 { 00:08:30.768 "name": "BaseBdev2", 00:08:30.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.768 
"is_configured": false, 00:08:30.768 "data_offset": 0, 00:08:30.768 "data_size": 0 00:08:30.768 } 00:08:30.768 ] 00:08:30.768 }' 00:08:30.768 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.768 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.028 [2024-09-30 12:25:42.843689] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:31.028 [2024-09-30 12:25:42.843808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.028 [2024-09-30 12:25:42.855715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:31.028 [2024-09-30 12:25:42.857567] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:31.028 [2024-09-30 12:25:42.857657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.028 12:25:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.028 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.028 12:25:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.028 "name": "Existed_Raid", 00:08:31.028 "uuid": "620d5238-a611-488f-8218-9e08c04ae28b", 00:08:31.028 "strip_size_kb": 64, 00:08:31.028 "state": "configuring", 00:08:31.028 "raid_level": "concat", 00:08:31.028 "superblock": true, 00:08:31.029 "num_base_bdevs": 2, 00:08:31.029 "num_base_bdevs_discovered": 1, 00:08:31.029 "num_base_bdevs_operational": 2, 00:08:31.029 "base_bdevs_list": [ 00:08:31.029 { 00:08:31.029 "name": "BaseBdev1", 00:08:31.029 "uuid": "3b9a0a61-baa4-43e2-82df-94ec3d3960cb", 00:08:31.029 "is_configured": true, 00:08:31.029 "data_offset": 2048, 00:08:31.029 "data_size": 63488 00:08:31.029 }, 00:08:31.029 { 00:08:31.029 "name": "BaseBdev2", 00:08:31.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.029 "is_configured": false, 00:08:31.029 "data_offset": 0, 00:08:31.029 "data_size": 0 00:08:31.029 } 00:08:31.029 ] 00:08:31.029 }' 00:08:31.029 12:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.029 12:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.598 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:31.598 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.598 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.598 [2024-09-30 12:25:43.337642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.598 [2024-09-30 12:25:43.338072] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:31.598 [2024-09-30 12:25:43.338137] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:31.598 [2024-09-30 12:25:43.338446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:08:31.598 BaseBdev2 00:08:31.598 [2024-09-30 12:25:43.338649] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:31.598 [2024-09-30 12:25:43.338667] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:31.598 [2024-09-30 12:25:43.338840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.598 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.599 12:25:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.599 [ 00:08:31.599 { 00:08:31.599 "name": "BaseBdev2", 00:08:31.599 "aliases": [ 00:08:31.599 "ff4a79e5-2fd8-4b4c-9186-9e0a0a5d6da5" 00:08:31.599 ], 00:08:31.599 "product_name": "Malloc disk", 00:08:31.599 "block_size": 512, 00:08:31.599 "num_blocks": 65536, 00:08:31.599 "uuid": "ff4a79e5-2fd8-4b4c-9186-9e0a0a5d6da5", 00:08:31.599 "assigned_rate_limits": { 00:08:31.599 "rw_ios_per_sec": 0, 00:08:31.599 "rw_mbytes_per_sec": 0, 00:08:31.599 "r_mbytes_per_sec": 0, 00:08:31.599 "w_mbytes_per_sec": 0 00:08:31.599 }, 00:08:31.599 "claimed": true, 00:08:31.599 "claim_type": "exclusive_write", 00:08:31.599 "zoned": false, 00:08:31.599 "supported_io_types": { 00:08:31.599 "read": true, 00:08:31.599 "write": true, 00:08:31.599 "unmap": true, 00:08:31.599 "flush": true, 00:08:31.599 "reset": true, 00:08:31.599 "nvme_admin": false, 00:08:31.599 "nvme_io": false, 00:08:31.599 "nvme_io_md": false, 00:08:31.599 "write_zeroes": true, 00:08:31.599 "zcopy": true, 00:08:31.599 "get_zone_info": false, 00:08:31.599 "zone_management": false, 00:08:31.599 "zone_append": false, 00:08:31.599 "compare": false, 00:08:31.599 "compare_and_write": false, 00:08:31.599 "abort": true, 00:08:31.599 "seek_hole": false, 00:08:31.599 "seek_data": false, 00:08:31.599 "copy": true, 00:08:31.599 "nvme_iov_md": false 00:08:31.599 }, 00:08:31.599 "memory_domains": [ 00:08:31.599 { 00:08:31.599 "dma_device_id": "system", 00:08:31.599 "dma_device_type": 1 00:08:31.599 }, 00:08:31.599 { 00:08:31.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.599 "dma_device_type": 2 00:08:31.599 } 00:08:31.599 ], 00:08:31.599 "driver_specific": {} 00:08:31.599 } 00:08:31.599 ] 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:31.599 12:25:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.599 12:25:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.599 "name": "Existed_Raid", 00:08:31.599 "uuid": "620d5238-a611-488f-8218-9e08c04ae28b", 00:08:31.599 "strip_size_kb": 64, 00:08:31.599 "state": "online", 00:08:31.599 "raid_level": "concat", 00:08:31.599 "superblock": true, 00:08:31.599 "num_base_bdevs": 2, 00:08:31.599 "num_base_bdevs_discovered": 2, 00:08:31.599 "num_base_bdevs_operational": 2, 00:08:31.599 "base_bdevs_list": [ 00:08:31.599 { 00:08:31.599 "name": "BaseBdev1", 00:08:31.599 "uuid": "3b9a0a61-baa4-43e2-82df-94ec3d3960cb", 00:08:31.599 "is_configured": true, 00:08:31.599 "data_offset": 2048, 00:08:31.599 "data_size": 63488 00:08:31.599 }, 00:08:31.599 { 00:08:31.599 "name": "BaseBdev2", 00:08:31.599 "uuid": "ff4a79e5-2fd8-4b4c-9186-9e0a0a5d6da5", 00:08:31.599 "is_configured": true, 00:08:31.599 "data_offset": 2048, 00:08:31.599 "data_size": 63488 00:08:31.599 } 00:08:31.599 ] 00:08:31.599 }' 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.599 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.167 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:32.167 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:32.167 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:32.167 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:32.167 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:32.167 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:32.167 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:32.167 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:32.167 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.167 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.167 [2024-09-30 12:25:43.797195] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.167 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.167 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:32.167 "name": "Existed_Raid", 00:08:32.167 "aliases": [ 00:08:32.167 "620d5238-a611-488f-8218-9e08c04ae28b" 00:08:32.167 ], 00:08:32.167 "product_name": "Raid Volume", 00:08:32.167 "block_size": 512, 00:08:32.167 "num_blocks": 126976, 00:08:32.167 "uuid": "620d5238-a611-488f-8218-9e08c04ae28b", 00:08:32.167 "assigned_rate_limits": { 00:08:32.167 "rw_ios_per_sec": 0, 00:08:32.167 "rw_mbytes_per_sec": 0, 00:08:32.167 "r_mbytes_per_sec": 0, 00:08:32.167 "w_mbytes_per_sec": 0 00:08:32.167 }, 00:08:32.167 "claimed": false, 00:08:32.167 "zoned": false, 00:08:32.167 "supported_io_types": { 00:08:32.167 "read": true, 00:08:32.167 "write": true, 00:08:32.167 "unmap": true, 00:08:32.167 "flush": true, 00:08:32.167 "reset": true, 00:08:32.167 "nvme_admin": false, 00:08:32.167 "nvme_io": false, 00:08:32.167 "nvme_io_md": false, 00:08:32.167 "write_zeroes": true, 00:08:32.167 "zcopy": false, 00:08:32.167 "get_zone_info": false, 00:08:32.167 "zone_management": false, 00:08:32.167 "zone_append": false, 00:08:32.167 "compare": false, 00:08:32.167 "compare_and_write": false, 00:08:32.167 "abort": false, 00:08:32.167 "seek_hole": false, 00:08:32.167 "seek_data": false, 00:08:32.167 "copy": false, 00:08:32.167 "nvme_iov_md": false 00:08:32.167 }, 00:08:32.167 "memory_domains": [ 00:08:32.167 { 00:08:32.167 
"dma_device_id": "system", 00:08:32.167 "dma_device_type": 1 00:08:32.167 }, 00:08:32.167 { 00:08:32.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.167 "dma_device_type": 2 00:08:32.167 }, 00:08:32.167 { 00:08:32.167 "dma_device_id": "system", 00:08:32.167 "dma_device_type": 1 00:08:32.167 }, 00:08:32.167 { 00:08:32.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.167 "dma_device_type": 2 00:08:32.167 } 00:08:32.167 ], 00:08:32.167 "driver_specific": { 00:08:32.167 "raid": { 00:08:32.167 "uuid": "620d5238-a611-488f-8218-9e08c04ae28b", 00:08:32.167 "strip_size_kb": 64, 00:08:32.167 "state": "online", 00:08:32.167 "raid_level": "concat", 00:08:32.167 "superblock": true, 00:08:32.167 "num_base_bdevs": 2, 00:08:32.167 "num_base_bdevs_discovered": 2, 00:08:32.167 "num_base_bdevs_operational": 2, 00:08:32.167 "base_bdevs_list": [ 00:08:32.167 { 00:08:32.167 "name": "BaseBdev1", 00:08:32.167 "uuid": "3b9a0a61-baa4-43e2-82df-94ec3d3960cb", 00:08:32.167 "is_configured": true, 00:08:32.167 "data_offset": 2048, 00:08:32.167 "data_size": 63488 00:08:32.167 }, 00:08:32.167 { 00:08:32.167 "name": "BaseBdev2", 00:08:32.167 "uuid": "ff4a79e5-2fd8-4b4c-9186-9e0a0a5d6da5", 00:08:32.167 "is_configured": true, 00:08:32.167 "data_offset": 2048, 00:08:32.167 "data_size": 63488 00:08:32.167 } 00:08:32.167 ] 00:08:32.167 } 00:08:32.168 } 00:08:32.168 }' 00:08:32.168 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:32.168 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:32.168 BaseBdev2' 00:08:32.168 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.168 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:32.168 12:25:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.168 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:32.168 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.168 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.168 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.168 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.168 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.168 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.168 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.168 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:32.168 12:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.168 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.168 12:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.168 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.168 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.168 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.168 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:32.168 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.168 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.168 [2024-09-30 12:25:44.032528] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:32.168 [2024-09-30 12:25:44.032616] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:32.168 [2024-09-30 12:25:44.032692] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.427 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.427 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:32.427 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:32.427 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:32.427 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:32.427 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:32.427 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:32.427 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.427 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:32.427 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.427 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.427 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:08:32.427 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.427 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.427 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.427 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.427 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.427 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.428 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.428 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.428 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.428 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.428 "name": "Existed_Raid", 00:08:32.428 "uuid": "620d5238-a611-488f-8218-9e08c04ae28b", 00:08:32.428 "strip_size_kb": 64, 00:08:32.428 "state": "offline", 00:08:32.428 "raid_level": "concat", 00:08:32.428 "superblock": true, 00:08:32.428 "num_base_bdevs": 2, 00:08:32.428 "num_base_bdevs_discovered": 1, 00:08:32.428 "num_base_bdevs_operational": 1, 00:08:32.428 "base_bdevs_list": [ 00:08:32.428 { 00:08:32.428 "name": null, 00:08:32.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.428 "is_configured": false, 00:08:32.428 "data_offset": 0, 00:08:32.428 "data_size": 63488 00:08:32.428 }, 00:08:32.428 { 00:08:32.428 "name": "BaseBdev2", 00:08:32.428 "uuid": "ff4a79e5-2fd8-4b4c-9186-9e0a0a5d6da5", 00:08:32.428 "is_configured": true, 00:08:32.428 "data_offset": 2048, 00:08:32.428 "data_size": 63488 00:08:32.428 } 00:08:32.428 ] 
00:08:32.428 }' 00:08:32.428 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.428 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.687 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:32.687 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:32.688 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.688 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:32.688 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.688 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.688 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.948 [2024-09-30 12:25:44.606253] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:32.948 [2024-09-30 12:25:44.606375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.948 12:25:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61850 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 61850 ']' 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 61850 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61850 00:08:32.948 killing process with pid 61850 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61850' 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 61850 00:08:32.948 [2024-09-30 12:25:44.790435] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.948 12:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 61850 00:08:32.948 [2024-09-30 12:25:44.805722] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:34.329 12:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:34.329 ************************************ 00:08:34.329 END TEST raid_state_function_test_sb 00:08:34.329 ************************************ 00:08:34.329 00:08:34.329 real 0m5.157s 00:08:34.329 user 0m7.395s 00:08:34.329 sys 0m0.842s 00:08:34.329 12:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.329 12:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.329 12:25:46 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:34.329 12:25:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:34.329 12:25:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.329 12:25:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:34.329 ************************************ 00:08:34.329 START TEST raid_superblock_test 00:08:34.329 ************************************ 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62105 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62105 00:08:34.329 12:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 62105 ']' 00:08:34.330 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.330 12:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.330 12:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:34.330 12:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.330 12:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.330 12:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.330 [2024-09-30 12:25:46.167035] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:34.330 [2024-09-30 12:25:46.167252] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62105 ] 00:08:34.589 [2024-09-30 12:25:46.331906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.849 [2024-09-30 12:25:46.522944] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.849 [2024-09-30 12:25:46.721238] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.849 [2024-09-30 12:25:46.721378] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.108 12:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.108 12:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:35.108 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:35.108 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:35.108 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:35.108 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:35.108 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:35.108 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:35.108 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:35.108 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:35.108 12:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:35.108 12:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.108 12:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.368 malloc1 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.368 [2024-09-30 12:25:47.031958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:35.368 [2024-09-30 12:25:47.032077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.368 [2024-09-30 12:25:47.032127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:35.368 [2024-09-30 12:25:47.032163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:35.368 [2024-09-30 12:25:47.034289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.368 [2024-09-30 12:25:47.034373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:35.368 pt1 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.368 malloc2 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.368 [2024-09-30 12:25:47.125033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:35.368 [2024-09-30 12:25:47.125156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.368 [2024-09-30 12:25:47.125201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:35.368 [2024-09-30 12:25:47.125256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.368 [2024-09-30 12:25:47.127315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.368 [2024-09-30 12:25:47.127394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:35.368 pt2 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.368 [2024-09-30 12:25:47.137083] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:35.368 [2024-09-30 12:25:47.138920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:35.368 [2024-09-30 12:25:47.139156] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:35.368 [2024-09-30 12:25:47.139206] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:08:35.368 [2024-09-30 12:25:47.139477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:35.368 [2024-09-30 12:25:47.139665] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:35.368 [2024-09-30 12:25:47.139716] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:35.368 [2024-09-30 12:25:47.139947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.368 12:25:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.368 "name": "raid_bdev1", 00:08:35.368 "uuid": "e55bbb15-f157-4b07-a39e-cc504902abef", 00:08:35.368 "strip_size_kb": 64, 00:08:35.368 "state": "online", 00:08:35.368 "raid_level": "concat", 00:08:35.368 "superblock": true, 00:08:35.368 "num_base_bdevs": 2, 00:08:35.368 "num_base_bdevs_discovered": 2, 00:08:35.368 "num_base_bdevs_operational": 2, 00:08:35.368 "base_bdevs_list": [ 00:08:35.368 { 00:08:35.368 "name": "pt1", 00:08:35.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:35.368 "is_configured": true, 00:08:35.368 "data_offset": 2048, 00:08:35.368 "data_size": 63488 00:08:35.368 }, 00:08:35.368 { 00:08:35.368 "name": "pt2", 00:08:35.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.368 "is_configured": true, 00:08:35.368 "data_offset": 2048, 00:08:35.368 "data_size": 63488 00:08:35.368 } 00:08:35.368 ] 00:08:35.368 }' 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.368 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.936 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:35.936 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:35.936 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:35.936 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:35.936 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.936 
12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.936 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:35.936 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.936 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.936 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:35.936 [2024-09-30 12:25:47.572550] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.936 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.936 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:35.936 "name": "raid_bdev1", 00:08:35.936 "aliases": [ 00:08:35.936 "e55bbb15-f157-4b07-a39e-cc504902abef" 00:08:35.936 ], 00:08:35.936 "product_name": "Raid Volume", 00:08:35.936 "block_size": 512, 00:08:35.937 "num_blocks": 126976, 00:08:35.937 "uuid": "e55bbb15-f157-4b07-a39e-cc504902abef", 00:08:35.937 "assigned_rate_limits": { 00:08:35.937 "rw_ios_per_sec": 0, 00:08:35.937 "rw_mbytes_per_sec": 0, 00:08:35.937 "r_mbytes_per_sec": 0, 00:08:35.937 "w_mbytes_per_sec": 0 00:08:35.937 }, 00:08:35.937 "claimed": false, 00:08:35.937 "zoned": false, 00:08:35.937 "supported_io_types": { 00:08:35.937 "read": true, 00:08:35.937 "write": true, 00:08:35.937 "unmap": true, 00:08:35.937 "flush": true, 00:08:35.937 "reset": true, 00:08:35.937 "nvme_admin": false, 00:08:35.937 "nvme_io": false, 00:08:35.937 "nvme_io_md": false, 00:08:35.937 "write_zeroes": true, 00:08:35.937 "zcopy": false, 00:08:35.937 "get_zone_info": false, 00:08:35.937 "zone_management": false, 00:08:35.937 "zone_append": false, 00:08:35.937 "compare": false, 00:08:35.937 "compare_and_write": false, 00:08:35.937 "abort": false, 00:08:35.937 "seek_hole": false, 00:08:35.937 
"seek_data": false, 00:08:35.937 "copy": false, 00:08:35.937 "nvme_iov_md": false 00:08:35.937 }, 00:08:35.937 "memory_domains": [ 00:08:35.937 { 00:08:35.937 "dma_device_id": "system", 00:08:35.937 "dma_device_type": 1 00:08:35.937 }, 00:08:35.937 { 00:08:35.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.937 "dma_device_type": 2 00:08:35.937 }, 00:08:35.937 { 00:08:35.937 "dma_device_id": "system", 00:08:35.937 "dma_device_type": 1 00:08:35.937 }, 00:08:35.937 { 00:08:35.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.937 "dma_device_type": 2 00:08:35.937 } 00:08:35.937 ], 00:08:35.937 "driver_specific": { 00:08:35.937 "raid": { 00:08:35.937 "uuid": "e55bbb15-f157-4b07-a39e-cc504902abef", 00:08:35.937 "strip_size_kb": 64, 00:08:35.937 "state": "online", 00:08:35.937 "raid_level": "concat", 00:08:35.937 "superblock": true, 00:08:35.937 "num_base_bdevs": 2, 00:08:35.937 "num_base_bdevs_discovered": 2, 00:08:35.937 "num_base_bdevs_operational": 2, 00:08:35.937 "base_bdevs_list": [ 00:08:35.937 { 00:08:35.937 "name": "pt1", 00:08:35.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:35.937 "is_configured": true, 00:08:35.937 "data_offset": 2048, 00:08:35.937 "data_size": 63488 00:08:35.937 }, 00:08:35.937 { 00:08:35.937 "name": "pt2", 00:08:35.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.937 "is_configured": true, 00:08:35.937 "data_offset": 2048, 00:08:35.937 "data_size": 63488 00:08:35.937 } 00:08:35.937 ] 00:08:35.937 } 00:08:35.937 } 00:08:35.937 }' 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:35.937 pt2' 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.937 12:25:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.937 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.937 [2024-09-30 12:25:47.824084] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e55bbb15-f157-4b07-a39e-cc504902abef 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e55bbb15-f157-4b07-a39e-cc504902abef ']' 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.197 [2024-09-30 12:25:47.867852] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:36.197 [2024-09-30 12:25:47.867922] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.197 [2024-09-30 12:25:47.868035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.197 [2024-09-30 12:25:47.868103] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.197 [2024-09-30 12:25:47.868156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.197 12:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.198 [2024-09-30 12:25:48.007640] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:36.198 [2024-09-30 12:25:48.009523] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:36.198 [2024-09-30 12:25:48.009642] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:36.198 [2024-09-30 12:25:48.009763] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:36.198 [2024-09-30 12:25:48.009841] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:36.198 [2024-09-30 12:25:48.009882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:36.198 request: 00:08:36.198 { 00:08:36.198 "name": "raid_bdev1", 00:08:36.198 "raid_level": "concat", 00:08:36.198 "base_bdevs": [ 00:08:36.198 "malloc1", 00:08:36.198 "malloc2" 00:08:36.198 ], 00:08:36.198 "strip_size_kb": 64, 00:08:36.198 "superblock": false, 00:08:36.198 "method": "bdev_raid_create", 00:08:36.198 "req_id": 1 00:08:36.198 } 00:08:36.198 Got JSON-RPC error response 00:08:36.198 response: 00:08:36.198 { 00:08:36.198 "code": -17, 00:08:36.198 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:36.198 } 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.198 
12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.198 [2024-09-30 12:25:48.071535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:36.198 [2024-09-30 12:25:48.071634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.198 [2024-09-30 12:25:48.071675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:36.198 [2024-09-30 12:25:48.071717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.198 [2024-09-30 12:25:48.073928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.198 [2024-09-30 12:25:48.074023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:36.198 [2024-09-30 12:25:48.074151] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:36.198 [2024-09-30 12:25:48.074258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:36.198 pt1 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.198 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.457 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.457 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.457 "name": "raid_bdev1", 00:08:36.457 "uuid": "e55bbb15-f157-4b07-a39e-cc504902abef", 00:08:36.457 "strip_size_kb": 64, 00:08:36.457 "state": "configuring", 00:08:36.457 "raid_level": "concat", 00:08:36.457 "superblock": true, 00:08:36.457 "num_base_bdevs": 2, 00:08:36.457 "num_base_bdevs_discovered": 1, 00:08:36.457 "num_base_bdevs_operational": 2, 00:08:36.457 "base_bdevs_list": [ 00:08:36.457 { 00:08:36.457 "name": "pt1", 00:08:36.457 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:36.457 "is_configured": true, 00:08:36.457 "data_offset": 2048, 00:08:36.457 "data_size": 63488 00:08:36.457 }, 00:08:36.457 { 00:08:36.457 "name": null, 00:08:36.457 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:36.457 "is_configured": false, 00:08:36.457 "data_offset": 2048, 00:08:36.457 "data_size": 63488 00:08:36.457 } 00:08:36.457 ] 00:08:36.457 }' 00:08:36.457 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.457 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.717 [2024-09-30 12:25:48.498842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:36.717 [2024-09-30 12:25:48.498984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.717 [2024-09-30 12:25:48.499030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:36.717 [2024-09-30 12:25:48.499069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.717 [2024-09-30 12:25:48.499583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.717 [2024-09-30 12:25:48.499658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:36.717 [2024-09-30 12:25:48.499792] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:36.717 [2024-09-30 12:25:48.499857] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:36.717 [2024-09-30 12:25:48.500032] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:36.717 [2024-09-30 12:25:48.500080] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:36.717 [2024-09-30 12:25:48.500343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:36.717 [2024-09-30 12:25:48.500540] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:36.717 [2024-09-30 12:25:48.500587] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:36.717 [2024-09-30 12:25:48.500787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.717 pt2 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.717 "name": "raid_bdev1", 00:08:36.717 "uuid": "e55bbb15-f157-4b07-a39e-cc504902abef", 00:08:36.717 "strip_size_kb": 64, 00:08:36.717 "state": "online", 00:08:36.717 "raid_level": "concat", 00:08:36.717 "superblock": true, 00:08:36.717 "num_base_bdevs": 2, 00:08:36.717 "num_base_bdevs_discovered": 2, 00:08:36.717 "num_base_bdevs_operational": 2, 00:08:36.717 "base_bdevs_list": [ 00:08:36.717 { 00:08:36.717 "name": "pt1", 00:08:36.717 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:36.717 "is_configured": true, 00:08:36.717 "data_offset": 2048, 00:08:36.717 "data_size": 63488 00:08:36.717 }, 00:08:36.717 { 00:08:36.717 "name": "pt2", 00:08:36.717 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:36.717 "is_configured": true, 00:08:36.717 "data_offset": 2048, 00:08:36.717 "data_size": 63488 00:08:36.717 } 00:08:36.717 ] 00:08:36.717 }' 00:08:36.717 12:25:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.717 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.285 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:37.285 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:37.285 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:37.285 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:37.285 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:37.285 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:37.285 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:37.285 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.285 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:37.285 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.285 [2024-09-30 12:25:48.934363] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.285 12:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.285 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:37.285 "name": "raid_bdev1", 00:08:37.285 "aliases": [ 00:08:37.285 "e55bbb15-f157-4b07-a39e-cc504902abef" 00:08:37.285 ], 00:08:37.285 "product_name": "Raid Volume", 00:08:37.285 "block_size": 512, 00:08:37.285 "num_blocks": 126976, 00:08:37.285 "uuid": "e55bbb15-f157-4b07-a39e-cc504902abef", 00:08:37.285 "assigned_rate_limits": { 00:08:37.285 "rw_ios_per_sec": 0, 00:08:37.285 "rw_mbytes_per_sec": 0, 00:08:37.285 
"r_mbytes_per_sec": 0, 00:08:37.285 "w_mbytes_per_sec": 0 00:08:37.285 }, 00:08:37.285 "claimed": false, 00:08:37.285 "zoned": false, 00:08:37.285 "supported_io_types": { 00:08:37.285 "read": true, 00:08:37.285 "write": true, 00:08:37.285 "unmap": true, 00:08:37.285 "flush": true, 00:08:37.285 "reset": true, 00:08:37.285 "nvme_admin": false, 00:08:37.285 "nvme_io": false, 00:08:37.285 "nvme_io_md": false, 00:08:37.285 "write_zeroes": true, 00:08:37.285 "zcopy": false, 00:08:37.285 "get_zone_info": false, 00:08:37.285 "zone_management": false, 00:08:37.285 "zone_append": false, 00:08:37.285 "compare": false, 00:08:37.285 "compare_and_write": false, 00:08:37.285 "abort": false, 00:08:37.285 "seek_hole": false, 00:08:37.285 "seek_data": false, 00:08:37.285 "copy": false, 00:08:37.285 "nvme_iov_md": false 00:08:37.285 }, 00:08:37.285 "memory_domains": [ 00:08:37.285 { 00:08:37.285 "dma_device_id": "system", 00:08:37.285 "dma_device_type": 1 00:08:37.285 }, 00:08:37.285 { 00:08:37.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.285 "dma_device_type": 2 00:08:37.285 }, 00:08:37.285 { 00:08:37.285 "dma_device_id": "system", 00:08:37.285 "dma_device_type": 1 00:08:37.285 }, 00:08:37.285 { 00:08:37.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.285 "dma_device_type": 2 00:08:37.285 } 00:08:37.285 ], 00:08:37.285 "driver_specific": { 00:08:37.285 "raid": { 00:08:37.285 "uuid": "e55bbb15-f157-4b07-a39e-cc504902abef", 00:08:37.285 "strip_size_kb": 64, 00:08:37.285 "state": "online", 00:08:37.285 "raid_level": "concat", 00:08:37.285 "superblock": true, 00:08:37.285 "num_base_bdevs": 2, 00:08:37.285 "num_base_bdevs_discovered": 2, 00:08:37.285 "num_base_bdevs_operational": 2, 00:08:37.285 "base_bdevs_list": [ 00:08:37.285 { 00:08:37.285 "name": "pt1", 00:08:37.285 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:37.285 "is_configured": true, 00:08:37.285 "data_offset": 2048, 00:08:37.285 "data_size": 63488 00:08:37.285 }, 00:08:37.285 { 00:08:37.285 "name": 
"pt2", 00:08:37.285 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:37.285 "is_configured": true, 00:08:37.285 "data_offset": 2048, 00:08:37.285 "data_size": 63488 00:08:37.285 } 00:08:37.285 ] 00:08:37.285 } 00:08:37.285 } 00:08:37.285 }' 00:08:37.285 12:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:37.285 12:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:37.285 pt2' 00:08:37.285 12:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.285 12:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:37.285 12:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.285 12:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:37.285 12:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.285 12:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.285 12:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.285 12:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.286 12:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.286 12:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.286 12:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.286 12:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:37.286 12:25:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.286 12:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.286 12:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.286 12:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.286 12:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.286 12:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.286 12:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:37.286 12:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:37.286 12:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.286 12:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.286 [2024-09-30 12:25:49.177888] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.545 12:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.545 12:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e55bbb15-f157-4b07-a39e-cc504902abef '!=' e55bbb15-f157-4b07-a39e-cc504902abef ']' 00:08:37.545 12:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:37.545 12:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:37.545 12:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:37.545 12:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62105 00:08:37.545 12:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 62105 ']' 00:08:37.545 12:25:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 62105 00:08:37.545 12:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:37.545 12:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:37.545 12:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62105 00:08:37.545 killing process with pid 62105 00:08:37.545 12:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:37.545 12:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:37.545 12:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62105' 00:08:37.545 12:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 62105 00:08:37.545 [2024-09-30 12:25:49.243626] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:37.545 [2024-09-30 12:25:49.243720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.545 [2024-09-30 12:25:49.243785] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.545 [2024-09-30 12:25:49.243799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:37.545 12:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 62105 00:08:37.804 [2024-09-30 12:25:49.441044] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:38.741 ************************************ 00:08:38.741 END TEST raid_superblock_test 00:08:38.741 ************************************ 00:08:38.741 12:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:38.741 00:08:38.741 real 0m4.540s 00:08:38.741 user 0m6.335s 00:08:38.741 sys 0m0.714s 00:08:38.741 12:25:50 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.741 12:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.001 12:25:50 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:39.001 12:25:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:39.001 12:25:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.001 12:25:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:39.001 ************************************ 00:08:39.001 START TEST raid_read_error_test 00:08:39.001 ************************************ 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 
-- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3uSXCDFcig 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62317 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62317 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 62317 ']' 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.001 12:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.001 [2024-09-30 12:25:50.789376] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:39.001 [2024-09-30 12:25:50.789577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62317 ] 00:08:39.260 [2024-09-30 12:25:50.952070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.260 [2024-09-30 12:25:51.147121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.520 [2024-09-30 12:25:51.345935] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.520 [2024-09-30 12:25:51.345970] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.780 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.780 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:39.780 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:39.780 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:39.780 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.780 12:25:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:39.780 BaseBdev1_malloc 00:08:39.780 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.780 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:39.780 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.780 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.780 true 00:08:39.780 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.780 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:39.780 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.780 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.780 [2024-09-30 12:25:51.660339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:39.780 [2024-09-30 12:25:51.660470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.780 [2024-09-30 12:25:51.660511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:39.780 [2024-09-30 12:25:51.660547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.780 [2024-09-30 12:25:51.662704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.780 [2024-09-30 12:25:51.662807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:39.780 BaseBdev1 00:08:39.780 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.780 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:39.780 12:25:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:39.780 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.780 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.040 BaseBdev2_malloc 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.040 true 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.040 [2024-09-30 12:25:51.741922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:40.040 [2024-09-30 12:25:51.742057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.040 [2024-09-30 12:25:51.742098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:40.040 [2024-09-30 12:25:51.742148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.040 [2024-09-30 12:25:51.744299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.040 [2024-09-30 12:25:51.744392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:08:40.040 BaseBdev2 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.040 [2024-09-30 12:25:51.753971] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.040 [2024-09-30 12:25:51.755836] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.040 [2024-09-30 12:25:51.756099] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:40.040 [2024-09-30 12:25:51.756156] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:40.040 [2024-09-30 12:25:51.756417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:40.040 [2024-09-30 12:25:51.756641] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:40.040 [2024-09-30 12:25:51.756690] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:40.040 [2024-09-30 12:25:51.756914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.040 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.040 "name": "raid_bdev1", 00:08:40.040 "uuid": "bca66297-9be3-4c3d-b5e2-02d4bc0aede6", 00:08:40.040 "strip_size_kb": 64, 00:08:40.040 "state": "online", 00:08:40.040 "raid_level": "concat", 00:08:40.040 "superblock": true, 00:08:40.040 "num_base_bdevs": 2, 00:08:40.040 "num_base_bdevs_discovered": 2, 00:08:40.040 "num_base_bdevs_operational": 2, 00:08:40.040 "base_bdevs_list": [ 00:08:40.040 { 00:08:40.040 "name": "BaseBdev1", 00:08:40.040 "uuid": "54761d87-47af-5017-afb2-6ffd465ba56c", 00:08:40.040 "is_configured": true, 00:08:40.040 "data_offset": 2048, 00:08:40.040 "data_size": 63488 
00:08:40.040 }, 00:08:40.040 { 00:08:40.040 "name": "BaseBdev2", 00:08:40.040 "uuid": "96ac6484-bb9c-5f23-b3d4-9c9e43fd8bdf", 00:08:40.040 "is_configured": true, 00:08:40.040 "data_offset": 2048, 00:08:40.040 "data_size": 63488 00:08:40.041 } 00:08:40.041 ] 00:08:40.041 }' 00:08:40.041 12:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.041 12:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.606 12:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:40.606 12:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:40.606 [2024-09-30 12:25:52.302163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:41.539 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:41.539 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.539 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.539 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.539 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:41.539 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:41.540 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:41.540 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:41.540 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.540 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:41.540 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.540 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.540 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:41.540 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.540 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.540 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.540 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.540 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.540 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.540 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.540 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.540 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.540 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.540 "name": "raid_bdev1", 00:08:41.540 "uuid": "bca66297-9be3-4c3d-b5e2-02d4bc0aede6", 00:08:41.540 "strip_size_kb": 64, 00:08:41.540 "state": "online", 00:08:41.540 "raid_level": "concat", 00:08:41.540 "superblock": true, 00:08:41.540 "num_base_bdevs": 2, 00:08:41.540 "num_base_bdevs_discovered": 2, 00:08:41.540 "num_base_bdevs_operational": 2, 00:08:41.540 "base_bdevs_list": [ 00:08:41.540 { 00:08:41.540 "name": "BaseBdev1", 00:08:41.540 "uuid": "54761d87-47af-5017-afb2-6ffd465ba56c", 00:08:41.540 "is_configured": true, 00:08:41.540 "data_offset": 2048, 00:08:41.540 "data_size": 63488 
00:08:41.540 }, 00:08:41.540 { 00:08:41.540 "name": "BaseBdev2", 00:08:41.540 "uuid": "96ac6484-bb9c-5f23-b3d4-9c9e43fd8bdf", 00:08:41.540 "is_configured": true, 00:08:41.540 "data_offset": 2048, 00:08:41.540 "data_size": 63488 00:08:41.540 } 00:08:41.540 ] 00:08:41.540 }' 00:08:41.540 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.540 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.798 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:41.798 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.798 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.798 [2024-09-30 12:25:53.657781] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.798 [2024-09-30 12:25:53.657880] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.798 [2024-09-30 12:25:53.660673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.798 [2024-09-30 12:25:53.660779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.798 [2024-09-30 12:25:53.660838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.798 [2024-09-30 12:25:53.660907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:41.798 { 00:08:41.798 "results": [ 00:08:41.798 { 00:08:41.798 "job": "raid_bdev1", 00:08:41.798 "core_mask": "0x1", 00:08:41.798 "workload": "randrw", 00:08:41.798 "percentage": 50, 00:08:41.798 "status": "finished", 00:08:41.798 "queue_depth": 1, 00:08:41.798 "io_size": 131072, 00:08:41.798 "runtime": 1.356601, 00:08:41.798 "iops": 16472.05036705708, 00:08:41.798 "mibps": 2059.006295882135, 00:08:41.798 
"io_failed": 1, 00:08:41.799 "io_timeout": 0, 00:08:41.799 "avg_latency_us": 84.18348357379428, 00:08:41.799 "min_latency_us": 25.152838427947597, 00:08:41.799 "max_latency_us": 1380.8349344978167 00:08:41.799 } 00:08:41.799 ], 00:08:41.799 "core_count": 1 00:08:41.799 } 00:08:41.799 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.799 12:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62317 00:08:41.799 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 62317 ']' 00:08:41.799 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 62317 00:08:41.799 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:41.799 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.799 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62317 00:08:42.057 killing process with pid 62317 00:08:42.057 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:42.057 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:42.057 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62317' 00:08:42.057 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 62317 00:08:42.057 [2024-09-30 12:25:53.706704] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.057 12:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 62317 00:08:42.057 [2024-09-30 12:25:53.834440] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:43.438 12:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3uSXCDFcig 00:08:43.438 12:25:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:43.438 12:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:43.438 12:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:43.438 12:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:43.438 12:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:43.439 12:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:43.439 ************************************ 00:08:43.439 END TEST raid_read_error_test 00:08:43.439 ************************************ 00:08:43.439 12:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:43.439 00:08:43.439 real 0m4.412s 00:08:43.439 user 0m5.225s 00:08:43.439 sys 0m0.543s 00:08:43.439 12:25:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.439 12:25:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.439 12:25:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:43.439 12:25:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:43.439 12:25:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.439 12:25:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.439 ************************************ 00:08:43.439 START TEST raid_write_error_test 00:08:43.439 ************************************ 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:43.439 12:25:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:43.439 12:25:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KIubeaTFWA 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62458 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62458 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 62458 ']' 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:43.439 12:25:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.439 [2024-09-30 12:25:55.285384] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:43.439 [2024-09-30 12:25:55.285625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62458 ] 00:08:43.722 [2024-09-30 12:25:55.453575] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.995 [2024-09-30 12:25:55.661995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.995 [2024-09-30 12:25:55.855101] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.995 [2024-09-30 12:25:55.855146] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.253 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.253 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:44.253 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:44.253 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:44.253 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.253 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.253 BaseBdev1_malloc 00:08:44.253 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.253 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:44.253 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.253 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.253 true 00:08:44.253 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:44.253 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:44.253 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.253 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.253 [2024-09-30 12:25:56.141540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:44.253 [2024-09-30 12:25:56.141600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.253 [2024-09-30 12:25:56.141619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:44.253 [2024-09-30 12:25:56.141631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.253 [2024-09-30 12:25:56.143780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.253 [2024-09-30 12:25:56.143872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:44.253 BaseBdev1 00:08:44.253 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.253 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:44.253 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:44.253 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.253 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.512 BaseBdev2_malloc 00:08:44.512 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.512 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:44.512 12:25:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.512 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.512 true 00:08:44.512 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.512 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:44.512 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.512 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.512 [2024-09-30 12:25:56.220156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:44.512 [2024-09-30 12:25:56.220216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.512 [2024-09-30 12:25:56.220234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:44.512 [2024-09-30 12:25:56.220247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.512 [2024-09-30 12:25:56.222322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.512 [2024-09-30 12:25:56.222460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:44.512 BaseBdev2 00:08:44.512 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.512 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:44.512 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.512 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.512 [2024-09-30 12:25:56.232204] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:44.512 [2024-09-30 12:25:56.234049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:44.512 [2024-09-30 12:25:56.234240] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:44.512 [2024-09-30 12:25:56.234257] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:44.512 [2024-09-30 12:25:56.234481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:44.512 [2024-09-30 12:25:56.234638] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:44.513 [2024-09-30 12:25:56.234648] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:44.513 [2024-09-30 12:25:56.234842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.513 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.513 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:44.513 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.513 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.513 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.513 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.513 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.513 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.513 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.513 12:25:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.513 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.513 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.513 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.513 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.513 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.513 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.513 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.513 "name": "raid_bdev1", 00:08:44.513 "uuid": "c65cfc49-a46b-418c-884c-25a3c4744682", 00:08:44.513 "strip_size_kb": 64, 00:08:44.513 "state": "online", 00:08:44.513 "raid_level": "concat", 00:08:44.513 "superblock": true, 00:08:44.513 "num_base_bdevs": 2, 00:08:44.513 "num_base_bdevs_discovered": 2, 00:08:44.513 "num_base_bdevs_operational": 2, 00:08:44.513 "base_bdevs_list": [ 00:08:44.513 { 00:08:44.513 "name": "BaseBdev1", 00:08:44.513 "uuid": "d43cca24-3d42-5065-a77f-e3bfe1c44479", 00:08:44.513 "is_configured": true, 00:08:44.513 "data_offset": 2048, 00:08:44.513 "data_size": 63488 00:08:44.513 }, 00:08:44.513 { 00:08:44.513 "name": "BaseBdev2", 00:08:44.513 "uuid": "5fef8726-ea2f-5ea6-8d7a-cee28a8d29e1", 00:08:44.513 "is_configured": true, 00:08:44.513 "data_offset": 2048, 00:08:44.513 "data_size": 63488 00:08:44.513 } 00:08:44.513 ] 00:08:44.513 }' 00:08:44.513 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.513 12:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.770 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:08:44.770 12:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:45.029 [2024-09-30 12:25:56.748603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.967 "name": "raid_bdev1", 00:08:45.967 "uuid": "c65cfc49-a46b-418c-884c-25a3c4744682", 00:08:45.967 "strip_size_kb": 64, 00:08:45.967 "state": "online", 00:08:45.967 "raid_level": "concat", 00:08:45.967 "superblock": true, 00:08:45.967 "num_base_bdevs": 2, 00:08:45.967 "num_base_bdevs_discovered": 2, 00:08:45.967 "num_base_bdevs_operational": 2, 00:08:45.967 "base_bdevs_list": [ 00:08:45.967 { 00:08:45.967 "name": "BaseBdev1", 00:08:45.967 "uuid": "d43cca24-3d42-5065-a77f-e3bfe1c44479", 00:08:45.967 "is_configured": true, 00:08:45.967 "data_offset": 2048, 00:08:45.967 "data_size": 63488 00:08:45.967 }, 00:08:45.967 { 00:08:45.967 "name": "BaseBdev2", 00:08:45.967 "uuid": "5fef8726-ea2f-5ea6-8d7a-cee28a8d29e1", 00:08:45.967 "is_configured": true, 00:08:45.967 "data_offset": 2048, 00:08:45.967 "data_size": 63488 00:08:45.967 } 00:08:45.967 ] 00:08:45.967 }' 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.967 12:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.226 12:25:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:46.226 12:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.226 12:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.226 [2024-09-30 12:25:58.084011] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.226 [2024-09-30 12:25:58.084128] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.226 [2024-09-30 12:25:58.086686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.226 [2024-09-30 12:25:58.086826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.226 [2024-09-30 12:25:58.086885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.226 [2024-09-30 12:25:58.086947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:46.226 { 00:08:46.226 "results": [ 00:08:46.226 { 00:08:46.226 "job": "raid_bdev1", 00:08:46.226 "core_mask": "0x1", 00:08:46.226 "workload": "randrw", 00:08:46.226 "percentage": 50, 00:08:46.226 "status": "finished", 00:08:46.226 "queue_depth": 1, 00:08:46.226 "io_size": 131072, 00:08:46.226 "runtime": 1.336317, 00:08:46.226 "iops": 16792.422755977812, 00:08:46.227 "mibps": 2099.0528444972265, 00:08:46.227 "io_failed": 1, 00:08:46.227 "io_timeout": 0, 00:08:46.227 "avg_latency_us": 82.48126687953604, 00:08:46.227 "min_latency_us": 25.041048034934498, 00:08:46.227 "max_latency_us": 1373.6803493449781 00:08:46.227 } 00:08:46.227 ], 00:08:46.227 "core_count": 1 00:08:46.227 } 00:08:46.227 12:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.227 12:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62458 00:08:46.227 12:25:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 62458 ']' 00:08:46.227 12:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 62458 00:08:46.227 12:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:46.227 12:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:46.227 12:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62458 00:08:46.486 12:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:46.486 12:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:46.486 12:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62458' 00:08:46.486 killing process with pid 62458 00:08:46.486 12:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 62458 00:08:46.486 [2024-09-30 12:25:58.135554] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.486 12:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 62458 00:08:46.486 [2024-09-30 12:25:58.270007] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.865 12:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KIubeaTFWA 00:08:47.865 12:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:47.865 12:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:47.865 12:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:47.865 12:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:47.865 ************************************ 00:08:47.865 END TEST raid_write_error_test 00:08:47.865 
************************************ 00:08:47.865 12:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:47.865 12:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:47.865 12:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:47.865 00:08:47.865 real 0m4.338s 00:08:47.865 user 0m5.070s 00:08:47.865 sys 0m0.579s 00:08:47.865 12:25:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.865 12:25:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.865 12:25:59 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:47.865 12:25:59 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:47.865 12:25:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:47.865 12:25:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.865 12:25:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.865 ************************************ 00:08:47.865 START TEST raid_state_function_test 00:08:47.865 ************************************ 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62596 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62596' 00:08:47.865 Process raid pid: 62596 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62596 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 62596 ']' 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.865 12:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.865 [2024-09-30 12:25:59.687027] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:47.865 [2024-09-30 12:25:59.687256] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.125 [2024-09-30 12:25:59.855613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.384 [2024-09-30 12:26:00.053951] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.384 [2024-09-30 12:26:00.258632] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.384 [2024-09-30 12:26:00.258774] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.647 [2024-09-30 12:26:00.498067] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.647 [2024-09-30 12:26:00.498126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.647 [2024-09-30 12:26:00.498137] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.647 [2024-09-30 12:26:00.498149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.647 12:26:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.647 12:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.905 12:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.905 "name": "Existed_Raid", 00:08:48.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.905 "strip_size_kb": 0, 00:08:48.905 "state": "configuring", 00:08:48.905 
"raid_level": "raid1", 00:08:48.905 "superblock": false, 00:08:48.905 "num_base_bdevs": 2, 00:08:48.905 "num_base_bdevs_discovered": 0, 00:08:48.905 "num_base_bdevs_operational": 2, 00:08:48.905 "base_bdevs_list": [ 00:08:48.905 { 00:08:48.905 "name": "BaseBdev1", 00:08:48.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.905 "is_configured": false, 00:08:48.905 "data_offset": 0, 00:08:48.905 "data_size": 0 00:08:48.905 }, 00:08:48.905 { 00:08:48.905 "name": "BaseBdev2", 00:08:48.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.905 "is_configured": false, 00:08:48.905 "data_offset": 0, 00:08:48.905 "data_size": 0 00:08:48.905 } 00:08:48.905 ] 00:08:48.905 }' 00:08:48.905 12:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.905 12:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.163 12:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:49.163 12:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.163 12:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.163 [2024-09-30 12:26:00.941245] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:49.163 [2024-09-30 12:26:00.941350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:49.163 12:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.163 12:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:49.163 12:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.163 12:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:49.163 [2024-09-30 12:26:00.949249] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:49.163 [2024-09-30 12:26:00.949355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:49.163 [2024-09-30 12:26:00.949389] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:49.163 [2024-09-30 12:26:00.949418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:49.163 12:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.163 12:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:49.163 12:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.163 12:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.163 [2024-09-30 12:26:01.026097] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.163 BaseBdev1 00:08:49.163 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.163 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:49.163 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:49.163 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:49.163 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:49.163 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:49.163 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:49.163 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:08:49.163 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.163 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.163 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.163 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:49.163 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.163 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.163 [ 00:08:49.163 { 00:08:49.163 "name": "BaseBdev1", 00:08:49.163 "aliases": [ 00:08:49.163 "41c2f090-492b-489c-8941-a6ddf96918e2" 00:08:49.163 ], 00:08:49.163 "product_name": "Malloc disk", 00:08:49.163 "block_size": 512, 00:08:49.163 "num_blocks": 65536, 00:08:49.163 "uuid": "41c2f090-492b-489c-8941-a6ddf96918e2", 00:08:49.163 "assigned_rate_limits": { 00:08:49.163 "rw_ios_per_sec": 0, 00:08:49.163 "rw_mbytes_per_sec": 0, 00:08:49.163 "r_mbytes_per_sec": 0, 00:08:49.163 "w_mbytes_per_sec": 0 00:08:49.163 }, 00:08:49.163 "claimed": true, 00:08:49.163 "claim_type": "exclusive_write", 00:08:49.163 "zoned": false, 00:08:49.163 "supported_io_types": { 00:08:49.163 "read": true, 00:08:49.163 "write": true, 00:08:49.163 "unmap": true, 00:08:49.163 "flush": true, 00:08:49.163 "reset": true, 00:08:49.163 "nvme_admin": false, 00:08:49.163 "nvme_io": false, 00:08:49.163 "nvme_io_md": false, 00:08:49.163 "write_zeroes": true, 00:08:49.163 "zcopy": true, 00:08:49.163 "get_zone_info": false, 00:08:49.163 "zone_management": false, 00:08:49.163 "zone_append": false, 00:08:49.163 "compare": false, 00:08:49.163 "compare_and_write": false, 00:08:49.421 "abort": true, 00:08:49.421 "seek_hole": false, 00:08:49.421 "seek_data": false, 00:08:49.421 "copy": true, 00:08:49.421 "nvme_iov_md": 
false 00:08:49.421 }, 00:08:49.421 "memory_domains": [ 00:08:49.421 { 00:08:49.421 "dma_device_id": "system", 00:08:49.421 "dma_device_type": 1 00:08:49.421 }, 00:08:49.421 { 00:08:49.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.421 "dma_device_type": 2 00:08:49.421 } 00:08:49.421 ], 00:08:49.421 "driver_specific": {} 00:08:49.421 } 00:08:49.421 ] 00:08:49.421 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.421 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:49.421 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:49.421 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.421 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.421 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.421 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.422 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.422 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.422 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.422 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.422 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.422 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.422 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.422 12:26:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.422 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.422 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.422 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.422 "name": "Existed_Raid", 00:08:49.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.422 "strip_size_kb": 0, 00:08:49.422 "state": "configuring", 00:08:49.422 "raid_level": "raid1", 00:08:49.422 "superblock": false, 00:08:49.422 "num_base_bdevs": 2, 00:08:49.422 "num_base_bdevs_discovered": 1, 00:08:49.422 "num_base_bdevs_operational": 2, 00:08:49.422 "base_bdevs_list": [ 00:08:49.422 { 00:08:49.422 "name": "BaseBdev1", 00:08:49.422 "uuid": "41c2f090-492b-489c-8941-a6ddf96918e2", 00:08:49.422 "is_configured": true, 00:08:49.422 "data_offset": 0, 00:08:49.422 "data_size": 65536 00:08:49.422 }, 00:08:49.422 { 00:08:49.422 "name": "BaseBdev2", 00:08:49.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.422 "is_configured": false, 00:08:49.422 "data_offset": 0, 00:08:49.422 "data_size": 0 00:08:49.422 } 00:08:49.422 ] 00:08:49.422 }' 00:08:49.422 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.422 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.680 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:49.680 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.680 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.680 [2024-09-30 12:26:01.505362] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:49.680 [2024-09-30 12:26:01.505480] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:49.680 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.680 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:49.680 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.680 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.680 [2024-09-30 12:26:01.517368] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.680 [2024-09-30 12:26:01.519187] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:49.680 [2024-09-30 12:26:01.519238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:49.680 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.680 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:49.680 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:49.680 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:49.680 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.680 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.681 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.681 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.681 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:49.681 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.681 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.681 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.681 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.681 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.681 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.681 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.681 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.681 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.681 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.681 "name": "Existed_Raid", 00:08:49.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.681 "strip_size_kb": 0, 00:08:49.681 "state": "configuring", 00:08:49.681 "raid_level": "raid1", 00:08:49.681 "superblock": false, 00:08:49.681 "num_base_bdevs": 2, 00:08:49.681 "num_base_bdevs_discovered": 1, 00:08:49.681 "num_base_bdevs_operational": 2, 00:08:49.681 "base_bdevs_list": [ 00:08:49.681 { 00:08:49.681 "name": "BaseBdev1", 00:08:49.681 "uuid": "41c2f090-492b-489c-8941-a6ddf96918e2", 00:08:49.681 "is_configured": true, 00:08:49.681 "data_offset": 0, 00:08:49.681 "data_size": 65536 00:08:49.681 }, 00:08:49.681 { 00:08:49.681 "name": "BaseBdev2", 00:08:49.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.681 "is_configured": false, 00:08:49.681 "data_offset": 0, 00:08:49.681 "data_size": 0 00:08:49.681 } 00:08:49.681 
] 00:08:49.681 }' 00:08:49.681 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.681 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.249 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:50.249 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.249 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.249 [2024-09-30 12:26:01.993440] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.249 [2024-09-30 12:26:01.993563] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:50.250 [2024-09-30 12:26:01.993593] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:50.250 [2024-09-30 12:26:01.993944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:50.250 [2024-09-30 12:26:01.994179] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:50.250 [2024-09-30 12:26:01.994234] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:50.250 [2024-09-30 12:26:01.994549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.250 BaseBdev2 00:08:50.250 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.250 12:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:50.250 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:50.250 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:50.250 12:26:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:50.250 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:50.250 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:50.250 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:50.250 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.250 12:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.250 [ 00:08:50.250 { 00:08:50.250 "name": "BaseBdev2", 00:08:50.250 "aliases": [ 00:08:50.250 "77a8e93a-b39a-4041-9e0f-de659c935791" 00:08:50.250 ], 00:08:50.250 "product_name": "Malloc disk", 00:08:50.250 "block_size": 512, 00:08:50.250 "num_blocks": 65536, 00:08:50.250 "uuid": "77a8e93a-b39a-4041-9e0f-de659c935791", 00:08:50.250 "assigned_rate_limits": { 00:08:50.250 "rw_ios_per_sec": 0, 00:08:50.250 "rw_mbytes_per_sec": 0, 00:08:50.250 "r_mbytes_per_sec": 0, 00:08:50.250 "w_mbytes_per_sec": 0 00:08:50.250 }, 00:08:50.250 "claimed": true, 00:08:50.250 "claim_type": "exclusive_write", 00:08:50.250 "zoned": false, 00:08:50.250 "supported_io_types": { 00:08:50.250 "read": true, 00:08:50.250 "write": true, 00:08:50.250 "unmap": true, 00:08:50.250 "flush": true, 00:08:50.250 "reset": true, 00:08:50.250 "nvme_admin": false, 00:08:50.250 "nvme_io": false, 00:08:50.250 "nvme_io_md": 
false, 00:08:50.250 "write_zeroes": true, 00:08:50.250 "zcopy": true, 00:08:50.250 "get_zone_info": false, 00:08:50.250 "zone_management": false, 00:08:50.250 "zone_append": false, 00:08:50.250 "compare": false, 00:08:50.250 "compare_and_write": false, 00:08:50.250 "abort": true, 00:08:50.250 "seek_hole": false, 00:08:50.250 "seek_data": false, 00:08:50.250 "copy": true, 00:08:50.250 "nvme_iov_md": false 00:08:50.250 }, 00:08:50.250 "memory_domains": [ 00:08:50.250 { 00:08:50.250 "dma_device_id": "system", 00:08:50.250 "dma_device_type": 1 00:08:50.250 }, 00:08:50.250 { 00:08:50.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.250 "dma_device_type": 2 00:08:50.250 } 00:08:50.250 ], 00:08:50.250 "driver_specific": {} 00:08:50.250 } 00:08:50.250 ] 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.250 "name": "Existed_Raid", 00:08:50.250 "uuid": "a39dd942-63aa-4428-acb1-1f25fdcbd825", 00:08:50.250 "strip_size_kb": 0, 00:08:50.250 "state": "online", 00:08:50.250 "raid_level": "raid1", 00:08:50.250 "superblock": false, 00:08:50.250 "num_base_bdevs": 2, 00:08:50.250 "num_base_bdevs_discovered": 2, 00:08:50.250 "num_base_bdevs_operational": 2, 00:08:50.250 "base_bdevs_list": [ 00:08:50.250 { 00:08:50.250 "name": "BaseBdev1", 00:08:50.250 "uuid": "41c2f090-492b-489c-8941-a6ddf96918e2", 00:08:50.250 "is_configured": true, 00:08:50.250 "data_offset": 0, 00:08:50.250 "data_size": 65536 00:08:50.250 }, 00:08:50.250 { 00:08:50.250 "name": "BaseBdev2", 00:08:50.250 "uuid": "77a8e93a-b39a-4041-9e0f-de659c935791", 00:08:50.250 "is_configured": true, 00:08:50.250 "data_offset": 0, 00:08:50.250 "data_size": 65536 00:08:50.250 } 00:08:50.250 ] 00:08:50.250 }' 00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:50.250 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.820 [2024-09-30 12:26:02.520860] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:50.820 "name": "Existed_Raid", 00:08:50.820 "aliases": [ 00:08:50.820 "a39dd942-63aa-4428-acb1-1f25fdcbd825" 00:08:50.820 ], 00:08:50.820 "product_name": "Raid Volume", 00:08:50.820 "block_size": 512, 00:08:50.820 "num_blocks": 65536, 00:08:50.820 "uuid": "a39dd942-63aa-4428-acb1-1f25fdcbd825", 00:08:50.820 "assigned_rate_limits": { 00:08:50.820 "rw_ios_per_sec": 0, 00:08:50.820 "rw_mbytes_per_sec": 0, 00:08:50.820 "r_mbytes_per_sec": 
0, 00:08:50.820 "w_mbytes_per_sec": 0 00:08:50.820 }, 00:08:50.820 "claimed": false, 00:08:50.820 "zoned": false, 00:08:50.820 "supported_io_types": { 00:08:50.820 "read": true, 00:08:50.820 "write": true, 00:08:50.820 "unmap": false, 00:08:50.820 "flush": false, 00:08:50.820 "reset": true, 00:08:50.820 "nvme_admin": false, 00:08:50.820 "nvme_io": false, 00:08:50.820 "nvme_io_md": false, 00:08:50.820 "write_zeroes": true, 00:08:50.820 "zcopy": false, 00:08:50.820 "get_zone_info": false, 00:08:50.820 "zone_management": false, 00:08:50.820 "zone_append": false, 00:08:50.820 "compare": false, 00:08:50.820 "compare_and_write": false, 00:08:50.820 "abort": false, 00:08:50.820 "seek_hole": false, 00:08:50.820 "seek_data": false, 00:08:50.820 "copy": false, 00:08:50.820 "nvme_iov_md": false 00:08:50.820 }, 00:08:50.820 "memory_domains": [ 00:08:50.820 { 00:08:50.820 "dma_device_id": "system", 00:08:50.820 "dma_device_type": 1 00:08:50.820 }, 00:08:50.820 { 00:08:50.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.820 "dma_device_type": 2 00:08:50.820 }, 00:08:50.820 { 00:08:50.820 "dma_device_id": "system", 00:08:50.820 "dma_device_type": 1 00:08:50.820 }, 00:08:50.820 { 00:08:50.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.820 "dma_device_type": 2 00:08:50.820 } 00:08:50.820 ], 00:08:50.820 "driver_specific": { 00:08:50.820 "raid": { 00:08:50.820 "uuid": "a39dd942-63aa-4428-acb1-1f25fdcbd825", 00:08:50.820 "strip_size_kb": 0, 00:08:50.820 "state": "online", 00:08:50.820 "raid_level": "raid1", 00:08:50.820 "superblock": false, 00:08:50.820 "num_base_bdevs": 2, 00:08:50.820 "num_base_bdevs_discovered": 2, 00:08:50.820 "num_base_bdevs_operational": 2, 00:08:50.820 "base_bdevs_list": [ 00:08:50.820 { 00:08:50.820 "name": "BaseBdev1", 00:08:50.820 "uuid": "41c2f090-492b-489c-8941-a6ddf96918e2", 00:08:50.820 "is_configured": true, 00:08:50.820 "data_offset": 0, 00:08:50.820 "data_size": 65536 00:08:50.820 }, 00:08:50.820 { 00:08:50.820 "name": "BaseBdev2", 
00:08:50.820 "uuid": "77a8e93a-b39a-4041-9e0f-de659c935791", 00:08:50.820 "is_configured": true, 00:08:50.820 "data_offset": 0, 00:08:50.820 "data_size": 65536 00:08:50.820 } 00:08:50.820 ] 00:08:50.820 } 00:08:50.820 } 00:08:50.820 }' 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:50.820 BaseBdev2' 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.820 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.080 [2024-09-30 12:26:02.748218] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.080 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.080 "name": "Existed_Raid", 00:08:51.080 "uuid": "a39dd942-63aa-4428-acb1-1f25fdcbd825", 00:08:51.080 "strip_size_kb": 0, 00:08:51.080 "state": "online", 00:08:51.081 "raid_level": "raid1", 00:08:51.081 "superblock": false, 00:08:51.081 "num_base_bdevs": 2, 00:08:51.081 "num_base_bdevs_discovered": 1, 00:08:51.081 "num_base_bdevs_operational": 1, 00:08:51.081 "base_bdevs_list": [ 00:08:51.081 
{ 00:08:51.081 "name": null, 00:08:51.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.081 "is_configured": false, 00:08:51.081 "data_offset": 0, 00:08:51.081 "data_size": 65536 00:08:51.081 }, 00:08:51.081 { 00:08:51.081 "name": "BaseBdev2", 00:08:51.081 "uuid": "77a8e93a-b39a-4041-9e0f-de659c935791", 00:08:51.081 "is_configured": true, 00:08:51.081 "data_offset": 0, 00:08:51.081 "data_size": 65536 00:08:51.081 } 00:08:51.081 ] 00:08:51.081 }' 00:08:51.081 12:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.081 12:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:51.650 [2024-09-30 12:26:03.314582] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:51.650 [2024-09-30 12:26:03.314772] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.650 [2024-09-30 12:26:03.408168] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.650 [2024-09-30 12:26:03.408308] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.650 [2024-09-30 12:26:03.408358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:51.650 12:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62596 00:08:51.651 12:26:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 62596 ']' 00:08:51.651 12:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 62596 00:08:51.651 12:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:51.651 12:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:51.651 12:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62596 00:08:51.651 12:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:51.651 12:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:51.651 12:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62596' 00:08:51.651 killing process with pid 62596 00:08:51.651 12:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 62596 00:08:51.651 [2024-09-30 12:26:03.507659] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.651 12:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 62596 00:08:51.651 [2024-09-30 12:26:03.523380] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:53.029 00:08:53.029 real 0m5.130s 00:08:53.029 user 0m7.294s 00:08:53.029 sys 0m0.858s 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.029 ************************************ 00:08:53.029 END TEST raid_state_function_test 00:08:53.029 ************************************ 00:08:53.029 12:26:04 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:53.029 12:26:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:53.029 12:26:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.029 12:26:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.029 ************************************ 00:08:53.029 START TEST raid_state_function_test_sb 00:08:53.029 ************************************ 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62849 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62849' 00:08:53.029 Process raid pid: 62849 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62849 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 62849 ']' 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.029 12:26:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.029 12:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.029 [2024-09-30 12:26:04.881794] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:53.029 [2024-09-30 12:26:04.881976] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.290 [2024-09-30 12:26:05.045844] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.550 [2024-09-30 12:26:05.257774] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.810 [2024-09-30 12:26:05.459177] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.810 [2024-09-30 12:26:05.459228] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.810 12:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.810 12:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:53.810 12:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:53.810 12:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.810 12:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.810 [2024-09-30 12:26:05.703656] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.810 [2024-09-30 12:26:05.703717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.810 [2024-09-30 12:26:05.703728] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.810 [2024-09-30 12:26:05.703750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.070 12:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.070 12:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:54.070 12:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.070 12:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.070 12:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.070 12:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.070 12:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.070 12:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.070 12:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.070 12:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.070 12:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.070 12:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.070 12:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:08:54.070 12:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.070 12:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.070 12:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.070 12:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.070 "name": "Existed_Raid", 00:08:54.070 "uuid": "23d7b4d5-46fc-442e-a3c4-cb64a8ab7b53", 00:08:54.070 "strip_size_kb": 0, 00:08:54.070 "state": "configuring", 00:08:54.071 "raid_level": "raid1", 00:08:54.071 "superblock": true, 00:08:54.071 "num_base_bdevs": 2, 00:08:54.071 "num_base_bdevs_discovered": 0, 00:08:54.071 "num_base_bdevs_operational": 2, 00:08:54.071 "base_bdevs_list": [ 00:08:54.071 { 00:08:54.071 "name": "BaseBdev1", 00:08:54.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.071 "is_configured": false, 00:08:54.071 "data_offset": 0, 00:08:54.071 "data_size": 0 00:08:54.071 }, 00:08:54.071 { 00:08:54.071 "name": "BaseBdev2", 00:08:54.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.071 "is_configured": false, 00:08:54.071 "data_offset": 0, 00:08:54.071 "data_size": 0 00:08:54.071 } 00:08:54.071 ] 00:08:54.071 }' 00:08:54.071 12:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.071 12:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.330 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:54.330 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.330 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.330 [2024-09-30 12:26:06.166875] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:54.330 [2024-09-30 12:26:06.166968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:54.330 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.330 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:54.330 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.330 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.330 [2024-09-30 12:26:06.178922] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:54.330 [2024-09-30 12:26:06.178978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:54.330 [2024-09-30 12:26:06.178989] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.330 [2024-09-30 12:26:06.179002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.330 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.330 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:54.330 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.330 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.591 [2024-09-30 12:26:06.260897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.591 BaseBdev1 00:08:54.591 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.591 12:26:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:54.591 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:54.591 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:54.591 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:54.591 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:54.591 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:54.591 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:54.591 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.591 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.591 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.591 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:54.591 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.591 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.591 [ 00:08:54.591 { 00:08:54.591 "name": "BaseBdev1", 00:08:54.591 "aliases": [ 00:08:54.591 "83f34936-6e89-4f9c-9d02-89293a07a3c4" 00:08:54.591 ], 00:08:54.591 "product_name": "Malloc disk", 00:08:54.591 "block_size": 512, 00:08:54.591 "num_blocks": 65536, 00:08:54.591 "uuid": "83f34936-6e89-4f9c-9d02-89293a07a3c4", 00:08:54.591 "assigned_rate_limits": { 00:08:54.591 "rw_ios_per_sec": 0, 00:08:54.591 "rw_mbytes_per_sec": 0, 00:08:54.591 "r_mbytes_per_sec": 0, 00:08:54.591 "w_mbytes_per_sec": 0 00:08:54.591 }, 00:08:54.591 "claimed": true, 
00:08:54.591 "claim_type": "exclusive_write", 00:08:54.591 "zoned": false, 00:08:54.591 "supported_io_types": { 00:08:54.591 "read": true, 00:08:54.591 "write": true, 00:08:54.591 "unmap": true, 00:08:54.591 "flush": true, 00:08:54.591 "reset": true, 00:08:54.591 "nvme_admin": false, 00:08:54.591 "nvme_io": false, 00:08:54.591 "nvme_io_md": false, 00:08:54.591 "write_zeroes": true, 00:08:54.591 "zcopy": true, 00:08:54.591 "get_zone_info": false, 00:08:54.591 "zone_management": false, 00:08:54.591 "zone_append": false, 00:08:54.591 "compare": false, 00:08:54.591 "compare_and_write": false, 00:08:54.591 "abort": true, 00:08:54.591 "seek_hole": false, 00:08:54.591 "seek_data": false, 00:08:54.591 "copy": true, 00:08:54.592 "nvme_iov_md": false 00:08:54.592 }, 00:08:54.592 "memory_domains": [ 00:08:54.592 { 00:08:54.592 "dma_device_id": "system", 00:08:54.592 "dma_device_type": 1 00:08:54.592 }, 00:08:54.592 { 00:08:54.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.592 "dma_device_type": 2 00:08:54.592 } 00:08:54.592 ], 00:08:54.592 "driver_specific": {} 00:08:54.592 } 00:08:54.592 ] 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.592 "name": "Existed_Raid", 00:08:54.592 "uuid": "fdeb3a29-e595-49a0-aa52-46d427d025fd", 00:08:54.592 "strip_size_kb": 0, 00:08:54.592 "state": "configuring", 00:08:54.592 "raid_level": "raid1", 00:08:54.592 "superblock": true, 00:08:54.592 "num_base_bdevs": 2, 00:08:54.592 "num_base_bdevs_discovered": 1, 00:08:54.592 "num_base_bdevs_operational": 2, 00:08:54.592 "base_bdevs_list": [ 00:08:54.592 { 00:08:54.592 "name": "BaseBdev1", 00:08:54.592 "uuid": "83f34936-6e89-4f9c-9d02-89293a07a3c4", 00:08:54.592 "is_configured": true, 00:08:54.592 "data_offset": 2048, 00:08:54.592 "data_size": 63488 00:08:54.592 }, 00:08:54.592 { 00:08:54.592 "name": "BaseBdev2", 00:08:54.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.592 "is_configured": false, 00:08:54.592 
"data_offset": 0, 00:08:54.592 "data_size": 0 00:08:54.592 } 00:08:54.592 ] 00:08:54.592 }' 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.592 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.852 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:54.852 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.852 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.852 [2024-09-30 12:26:06.740147] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:54.852 [2024-09-30 12:26:06.740197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:54.852 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.852 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:54.852 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.852 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.112 [2024-09-30 12:26:06.752166] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.112 [2024-09-30 12:26:06.753985] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.112 [2024-09-30 12:26:06.754028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.112 "name": "Existed_Raid", 00:08:55.112 "uuid": "48b7f752-6a49-4de3-a000-f3e3a65b4439", 00:08:55.112 "strip_size_kb": 0, 00:08:55.112 "state": "configuring", 00:08:55.112 "raid_level": "raid1", 00:08:55.112 "superblock": true, 00:08:55.112 "num_base_bdevs": 2, 00:08:55.112 "num_base_bdevs_discovered": 1, 00:08:55.112 "num_base_bdevs_operational": 2, 00:08:55.112 "base_bdevs_list": [ 00:08:55.112 { 00:08:55.112 "name": "BaseBdev1", 00:08:55.112 "uuid": "83f34936-6e89-4f9c-9d02-89293a07a3c4", 00:08:55.112 "is_configured": true, 00:08:55.112 "data_offset": 2048, 00:08:55.112 "data_size": 63488 00:08:55.112 }, 00:08:55.112 { 00:08:55.112 "name": "BaseBdev2", 00:08:55.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.112 "is_configured": false, 00:08:55.112 "data_offset": 0, 00:08:55.112 "data_size": 0 00:08:55.112 } 00:08:55.112 ] 00:08:55.112 }' 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.112 12:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.372 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:55.372 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.372 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.372 [2024-09-30 12:26:07.238674] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:55.372 [2024-09-30 12:26:07.239082] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:55.372 [2024-09-30 12:26:07.239145] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:55.372 [2024-09-30 12:26:07.239554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:55.372 
BaseBdev2 00:08:55.372 [2024-09-30 12:26:07.239788] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:55.372 [2024-09-30 12:26:07.239843] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:55.372 [2024-09-30 12:26:07.240068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.372 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.372 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:55.372 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:55.372 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:55.372 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:55.372 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:55.372 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:55.372 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:55.372 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.373 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.373 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.373 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:55.373 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.373 12:26:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:55.373 [ 00:08:55.373 { 00:08:55.373 "name": "BaseBdev2", 00:08:55.373 "aliases": [ 00:08:55.373 "46eddaf4-3b47-4ddd-8867-0787a0a35eb5" 00:08:55.373 ], 00:08:55.373 "product_name": "Malloc disk", 00:08:55.373 "block_size": 512, 00:08:55.373 "num_blocks": 65536, 00:08:55.373 "uuid": "46eddaf4-3b47-4ddd-8867-0787a0a35eb5", 00:08:55.373 "assigned_rate_limits": { 00:08:55.373 "rw_ios_per_sec": 0, 00:08:55.373 "rw_mbytes_per_sec": 0, 00:08:55.373 "r_mbytes_per_sec": 0, 00:08:55.373 "w_mbytes_per_sec": 0 00:08:55.373 }, 00:08:55.373 "claimed": true, 00:08:55.373 "claim_type": "exclusive_write", 00:08:55.373 "zoned": false, 00:08:55.373 "supported_io_types": { 00:08:55.373 "read": true, 00:08:55.633 "write": true, 00:08:55.633 "unmap": true, 00:08:55.633 "flush": true, 00:08:55.633 "reset": true, 00:08:55.633 "nvme_admin": false, 00:08:55.633 "nvme_io": false, 00:08:55.633 "nvme_io_md": false, 00:08:55.633 "write_zeroes": true, 00:08:55.633 "zcopy": true, 00:08:55.633 "get_zone_info": false, 00:08:55.633 "zone_management": false, 00:08:55.633 "zone_append": false, 00:08:55.633 "compare": false, 00:08:55.633 "compare_and_write": false, 00:08:55.633 "abort": true, 00:08:55.633 "seek_hole": false, 00:08:55.633 "seek_data": false, 00:08:55.633 "copy": true, 00:08:55.633 "nvme_iov_md": false 00:08:55.633 }, 00:08:55.633 "memory_domains": [ 00:08:55.633 { 00:08:55.633 "dma_device_id": "system", 00:08:55.633 "dma_device_type": 1 00:08:55.633 }, 00:08:55.633 { 00:08:55.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.633 "dma_device_type": 2 00:08:55.633 } 00:08:55.633 ], 00:08:55.633 "driver_specific": {} 00:08:55.633 } 00:08:55.633 ] 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:55.633 "name": "Existed_Raid", 00:08:55.633 "uuid": "48b7f752-6a49-4de3-a000-f3e3a65b4439", 00:08:55.633 "strip_size_kb": 0, 00:08:55.633 "state": "online", 00:08:55.633 "raid_level": "raid1", 00:08:55.633 "superblock": true, 00:08:55.633 "num_base_bdevs": 2, 00:08:55.633 "num_base_bdevs_discovered": 2, 00:08:55.633 "num_base_bdevs_operational": 2, 00:08:55.633 "base_bdevs_list": [ 00:08:55.633 { 00:08:55.633 "name": "BaseBdev1", 00:08:55.633 "uuid": "83f34936-6e89-4f9c-9d02-89293a07a3c4", 00:08:55.633 "is_configured": true, 00:08:55.633 "data_offset": 2048, 00:08:55.633 "data_size": 63488 00:08:55.633 }, 00:08:55.633 { 00:08:55.633 "name": "BaseBdev2", 00:08:55.633 "uuid": "46eddaf4-3b47-4ddd-8867-0787a0a35eb5", 00:08:55.633 "is_configured": true, 00:08:55.633 "data_offset": 2048, 00:08:55.633 "data_size": 63488 00:08:55.633 } 00:08:55.633 ] 00:08:55.633 }' 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.633 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.893 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:55.893 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:55.893 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:55.893 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:55.893 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:55.893 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:55.893 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:55.893 12:26:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.893 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.893 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:55.893 [2024-09-30 12:26:07.698182] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.893 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.893 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:55.893 "name": "Existed_Raid", 00:08:55.893 "aliases": [ 00:08:55.893 "48b7f752-6a49-4de3-a000-f3e3a65b4439" 00:08:55.893 ], 00:08:55.893 "product_name": "Raid Volume", 00:08:55.893 "block_size": 512, 00:08:55.893 "num_blocks": 63488, 00:08:55.893 "uuid": "48b7f752-6a49-4de3-a000-f3e3a65b4439", 00:08:55.893 "assigned_rate_limits": { 00:08:55.893 "rw_ios_per_sec": 0, 00:08:55.893 "rw_mbytes_per_sec": 0, 00:08:55.893 "r_mbytes_per_sec": 0, 00:08:55.893 "w_mbytes_per_sec": 0 00:08:55.893 }, 00:08:55.893 "claimed": false, 00:08:55.893 "zoned": false, 00:08:55.893 "supported_io_types": { 00:08:55.893 "read": true, 00:08:55.893 "write": true, 00:08:55.893 "unmap": false, 00:08:55.893 "flush": false, 00:08:55.893 "reset": true, 00:08:55.893 "nvme_admin": false, 00:08:55.893 "nvme_io": false, 00:08:55.893 "nvme_io_md": false, 00:08:55.893 "write_zeroes": true, 00:08:55.893 "zcopy": false, 00:08:55.893 "get_zone_info": false, 00:08:55.893 "zone_management": false, 00:08:55.893 "zone_append": false, 00:08:55.893 "compare": false, 00:08:55.893 "compare_and_write": false, 00:08:55.893 "abort": false, 00:08:55.893 "seek_hole": false, 00:08:55.893 "seek_data": false, 00:08:55.893 "copy": false, 00:08:55.893 "nvme_iov_md": false 00:08:55.893 }, 00:08:55.893 "memory_domains": [ 00:08:55.893 { 00:08:55.893 "dma_device_id": "system", 00:08:55.893 
"dma_device_type": 1 00:08:55.893 }, 00:08:55.893 { 00:08:55.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.893 "dma_device_type": 2 00:08:55.893 }, 00:08:55.893 { 00:08:55.893 "dma_device_id": "system", 00:08:55.893 "dma_device_type": 1 00:08:55.893 }, 00:08:55.893 { 00:08:55.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.893 "dma_device_type": 2 00:08:55.893 } 00:08:55.893 ], 00:08:55.893 "driver_specific": { 00:08:55.893 "raid": { 00:08:55.893 "uuid": "48b7f752-6a49-4de3-a000-f3e3a65b4439", 00:08:55.893 "strip_size_kb": 0, 00:08:55.893 "state": "online", 00:08:55.893 "raid_level": "raid1", 00:08:55.893 "superblock": true, 00:08:55.893 "num_base_bdevs": 2, 00:08:55.893 "num_base_bdevs_discovered": 2, 00:08:55.893 "num_base_bdevs_operational": 2, 00:08:55.893 "base_bdevs_list": [ 00:08:55.893 { 00:08:55.893 "name": "BaseBdev1", 00:08:55.893 "uuid": "83f34936-6e89-4f9c-9d02-89293a07a3c4", 00:08:55.893 "is_configured": true, 00:08:55.893 "data_offset": 2048, 00:08:55.893 "data_size": 63488 00:08:55.893 }, 00:08:55.893 { 00:08:55.893 "name": "BaseBdev2", 00:08:55.893 "uuid": "46eddaf4-3b47-4ddd-8867-0787a0a35eb5", 00:08:55.893 "is_configured": true, 00:08:55.893 "data_offset": 2048, 00:08:55.893 "data_size": 63488 00:08:55.893 } 00:08:55.893 ] 00:08:55.893 } 00:08:55.893 } 00:08:55.893 }' 00:08:55.893 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:56.156 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:56.156 BaseBdev2' 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:56.157 12:26:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.157 12:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.157 [2024-09-30 12:26:07.929564] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:56.157 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.157 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:56.157 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:56.157 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:56.157 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:56.157 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:56.157 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:56.157 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.157 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.157 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.158 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.158 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:56.158 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.158 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.158 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:56.158 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.158 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.158 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.158 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.158 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.158 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.424 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.424 "name": "Existed_Raid", 00:08:56.424 "uuid": "48b7f752-6a49-4de3-a000-f3e3a65b4439", 00:08:56.424 "strip_size_kb": 0, 00:08:56.424 "state": "online", 00:08:56.424 "raid_level": "raid1", 00:08:56.424 "superblock": true, 00:08:56.424 "num_base_bdevs": 2, 00:08:56.424 "num_base_bdevs_discovered": 1, 00:08:56.424 "num_base_bdevs_operational": 1, 00:08:56.424 "base_bdevs_list": [ 00:08:56.424 { 00:08:56.424 "name": null, 00:08:56.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.424 "is_configured": false, 00:08:56.424 "data_offset": 0, 00:08:56.424 "data_size": 63488 00:08:56.424 }, 00:08:56.424 { 00:08:56.424 "name": "BaseBdev2", 00:08:56.424 "uuid": "46eddaf4-3b47-4ddd-8867-0787a0a35eb5", 00:08:56.424 "is_configured": true, 00:08:56.424 "data_offset": 2048, 00:08:56.424 "data_size": 63488 00:08:56.424 } 00:08:56.424 ] 00:08:56.424 }' 00:08:56.424 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.424 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.684 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:08:56.684 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.684 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:56.684 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.684 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.684 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.684 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.684 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:56.684 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:56.684 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:56.684 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.684 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.684 [2024-09-30 12:26:08.516917] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:56.684 [2024-09-30 12:26:08.517081] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.944 [2024-09-30 12:26:08.609844] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.944 [2024-09-30 12:26:08.609969] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.944 [2024-09-30 12:26:08.610021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62849 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 62849 ']' 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 62849 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62849 00:08:56.944 killing process with pid 62849 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62849' 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 62849 00:08:56.944 [2024-09-30 12:26:08.688002] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.944 12:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 62849 00:08:56.944 [2024-09-30 12:26:08.704148] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:58.324 12:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:58.324 00:08:58.324 real 0m5.111s 00:08:58.324 user 0m7.255s 00:08:58.324 sys 0m0.843s 00:08:58.324 12:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.324 12:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.324 ************************************ 00:08:58.324 END TEST raid_state_function_test_sb 00:08:58.324 ************************************ 00:08:58.324 12:26:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:58.324 12:26:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:58.324 12:26:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.324 12:26:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:58.324 ************************************ 00:08:58.324 START TEST raid_superblock_test 00:08:58.324 ************************************ 00:08:58.324 12:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:08:58.324 12:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:08:58.324 12:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63101 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63101 00:08:58.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 63101 ']' 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:58.325 12:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.325 [2024-09-30 12:26:10.064586] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:58.325 [2024-09-30 12:26:10.064711] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63101 ] 00:08:58.583 [2024-09-30 12:26:10.220398] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.583 [2024-09-30 12:26:10.421740] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.843 [2024-09-30 12:26:10.614532] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.843 [2024-09-30 12:26:10.614584] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.102 12:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:59.102 12:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:59.102 12:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:59.102 12:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= 
num_base_bdevs )) 00:08:59.102 12:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:59.102 12:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:59.102 12:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:59.102 12:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.103 malloc1 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.103 [2024-09-30 12:26:10.922877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:59.103 [2024-09-30 12:26:10.923007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.103 [2024-09-30 12:26:10.923051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:59.103 [2024-09-30 12:26:10.923089] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:08:59.103 [2024-09-30 12:26:10.925214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.103 [2024-09-30 12:26:10.925298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:59.103 pt1 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.103 12:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.363 malloc2 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.363 [2024-09-30 12:26:11.011167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:59.363 [2024-09-30 12:26:11.011286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.363 [2024-09-30 12:26:11.011327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:59.363 [2024-09-30 12:26:11.011370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.363 [2024-09-30 12:26:11.013443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.363 [2024-09-30 12:26:11.013524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:59.363 pt2 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.363 [2024-09-30 12:26:11.023218] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:59.363 [2024-09-30 12:26:11.025000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:59.363 [2024-09-30 12:26:11.025171] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:59.363 [2024-09-30 12:26:11.025185] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, 
blocklen 512 00:08:59.363 [2024-09-30 12:26:11.025425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:59.363 [2024-09-30 12:26:11.025591] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:59.363 [2024-09-30 12:26:11.025605] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:59.363 [2024-09-30 12:26:11.025741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.363 
12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.363 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.363 "name": "raid_bdev1", 00:08:59.363 "uuid": "2747d857-b225-42d8-84c3-3a03c6bb4827", 00:08:59.363 "strip_size_kb": 0, 00:08:59.363 "state": "online", 00:08:59.363 "raid_level": "raid1", 00:08:59.363 "superblock": true, 00:08:59.363 "num_base_bdevs": 2, 00:08:59.363 "num_base_bdevs_discovered": 2, 00:08:59.363 "num_base_bdevs_operational": 2, 00:08:59.363 "base_bdevs_list": [ 00:08:59.363 { 00:08:59.363 "name": "pt1", 00:08:59.363 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.363 "is_configured": true, 00:08:59.363 "data_offset": 2048, 00:08:59.363 "data_size": 63488 00:08:59.363 }, 00:08:59.363 { 00:08:59.363 "name": "pt2", 00:08:59.363 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.363 "is_configured": true, 00:08:59.363 "data_offset": 2048, 00:08:59.363 "data_size": 63488 00:08:59.364 } 00:08:59.364 ] 00:08:59.364 }' 00:08:59.364 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.364 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.625 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:59.625 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:59.625 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:59.625 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:59.625 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 
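The `verify_raid_bdev_state raid_bdev1 online raid1 0 2` step above selects the raid bdev from `bdev_raid_get_bdevs all` output and compares a handful of fields against the expected values. As an illustration only (not part of the test suite), the same check can be sketched in Python against the JSON literal captured in this log, trimmed to the fields the helper inspects:

```python
import json

# JSON captured by `rpc_cmd bdev_raid_get_bdevs all` in this log,
# trimmed to the fields verify_raid_bdev_state actually compares.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "2747d857-b225-42d8-84c3-3a03c6bb4827",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}
""")

# Mirror the arguments passed to verify_raid_bdev_state:
#   expected_state=online, raid_level=raid1, strip_size=0,
#   num_base_bdevs_operational=2
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "raid1"
assert raid_bdev_info["strip_size_kb"] == 0
assert raid_bdev_info["num_base_bdevs_operational"] == 2
assert raid_bdev_info["num_base_bdevs_discovered"] == raid_bdev_info["num_base_bdevs"]
```

In the shell script itself the selection is done with `jq -r '.[] | select(.name == "raid_bdev1")'`; the sketch above assumes that filter has already been applied.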
00:08:59.625 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:59.625 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:59.625 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:59.625 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.625 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.625 [2024-09-30 12:26:11.458772] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.625 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.625 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:59.625 "name": "raid_bdev1", 00:08:59.625 "aliases": [ 00:08:59.625 "2747d857-b225-42d8-84c3-3a03c6bb4827" 00:08:59.625 ], 00:08:59.625 "product_name": "Raid Volume", 00:08:59.625 "block_size": 512, 00:08:59.625 "num_blocks": 63488, 00:08:59.625 "uuid": "2747d857-b225-42d8-84c3-3a03c6bb4827", 00:08:59.625 "assigned_rate_limits": { 00:08:59.625 "rw_ios_per_sec": 0, 00:08:59.625 "rw_mbytes_per_sec": 0, 00:08:59.625 "r_mbytes_per_sec": 0, 00:08:59.625 "w_mbytes_per_sec": 0 00:08:59.625 }, 00:08:59.625 "claimed": false, 00:08:59.625 "zoned": false, 00:08:59.625 "supported_io_types": { 00:08:59.625 "read": true, 00:08:59.625 "write": true, 00:08:59.625 "unmap": false, 00:08:59.625 "flush": false, 00:08:59.625 "reset": true, 00:08:59.625 "nvme_admin": false, 00:08:59.625 "nvme_io": false, 00:08:59.625 "nvme_io_md": false, 00:08:59.625 "write_zeroes": true, 00:08:59.625 "zcopy": false, 00:08:59.625 "get_zone_info": false, 00:08:59.625 "zone_management": false, 00:08:59.625 "zone_append": false, 00:08:59.625 "compare": false, 00:08:59.625 "compare_and_write": false, 00:08:59.625 "abort": false, 00:08:59.625 "seek_hole": 
false, 00:08:59.625 "seek_data": false, 00:08:59.625 "copy": false, 00:08:59.625 "nvme_iov_md": false 00:08:59.625 }, 00:08:59.625 "memory_domains": [ 00:08:59.625 { 00:08:59.625 "dma_device_id": "system", 00:08:59.625 "dma_device_type": 1 00:08:59.625 }, 00:08:59.625 { 00:08:59.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.625 "dma_device_type": 2 00:08:59.625 }, 00:08:59.625 { 00:08:59.625 "dma_device_id": "system", 00:08:59.625 "dma_device_type": 1 00:08:59.625 }, 00:08:59.625 { 00:08:59.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.625 "dma_device_type": 2 00:08:59.625 } 00:08:59.625 ], 00:08:59.625 "driver_specific": { 00:08:59.625 "raid": { 00:08:59.625 "uuid": "2747d857-b225-42d8-84c3-3a03c6bb4827", 00:08:59.625 "strip_size_kb": 0, 00:08:59.625 "state": "online", 00:08:59.625 "raid_level": "raid1", 00:08:59.625 "superblock": true, 00:08:59.625 "num_base_bdevs": 2, 00:08:59.625 "num_base_bdevs_discovered": 2, 00:08:59.625 "num_base_bdevs_operational": 2, 00:08:59.625 "base_bdevs_list": [ 00:08:59.625 { 00:08:59.625 "name": "pt1", 00:08:59.625 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.625 "is_configured": true, 00:08:59.625 "data_offset": 2048, 00:08:59.625 "data_size": 63488 00:08:59.625 }, 00:08:59.625 { 00:08:59.625 "name": "pt2", 00:08:59.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.625 "is_configured": true, 00:08:59.625 "data_offset": 2048, 00:08:59.625 "data_size": 63488 00:08:59.625 } 00:08:59.625 ] 00:08:59.625 } 00:08:59.625 } 00:08:59.625 }' 00:08:59.625 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:59.886 pt2' 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.886 12:26:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.886 [2024-09-30 12:26:11.690264] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2747d857-b225-42d8-84c3-3a03c6bb4827 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2747d857-b225-42d8-84c3-3a03c6bb4827 ']' 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.886 [2024-09-30 12:26:11.737964] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:59.886 [2024-09-30 12:26:11.737990] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:59.886 [2024-09-30 12:26:11.738065] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.886 [2024-09-30 12:26:11.738130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:59.886 [2024-09-30 12:26:11.738143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.886 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.155 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:00.155 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
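After `bdev_raid_delete` and the two `bdev_passthru_delete` calls, the script verifies that no passthru bdevs survive by piping `bdev_get_bdevs` through `jq -r '[.[] | select(.product_name == "passthru")] | any'` and expecting `false`. A minimal Python equivalent of that jq predicate, shown with hypothetical sample data (the bdev list below is not taken from this log):

```python
# Equivalent of the jq filter the script runs at this point:
#   jq -r '[.[] | select(.product_name == "passthru")] | any'
# It answers: does any remaining bdev still report product_name "passthru"?
def any_passthru(bdevs):
    return any(b.get("product_name") == "passthru" for b in bdevs)

# Hypothetical sample: after bdev_passthru_delete removed pt1 and pt2,
# only the malloc base bdevs remain, so the check comes back False.
remaining = [
    {"name": "malloc1", "product_name": "Malloc disk"},
    {"name": "malloc2", "product_name": "Malloc disk"},
]
assert any_passthru(remaining) is False
```

Like jq's `any` on an empty array, `any()` over an empty list returns `False`, so the check also passes when every bdev has been deleted.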
00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.156 [2024-09-30 12:26:11.865797] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:00.156 [2024-09-30 12:26:11.867556] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:00.156 [2024-09-30 12:26:11.867622] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:09:00.156 [2024-09-30 12:26:11.867673] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:00.156 [2024-09-30 12:26:11.867689] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:00.156 [2024-09-30 12:26:11.867701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:00.156 request: 00:09:00.156 { 00:09:00.156 "name": "raid_bdev1", 00:09:00.156 "raid_level": "raid1", 00:09:00.156 "base_bdevs": [ 00:09:00.156 "malloc1", 00:09:00.156 "malloc2" 00:09:00.156 ], 00:09:00.156 "superblock": false, 00:09:00.156 "method": "bdev_raid_create", 00:09:00.156 "req_id": 1 00:09:00.156 } 00:09:00.156 Got JSON-RPC error response 00:09:00.156 response: 00:09:00.156 { 00:09:00.156 "code": -17, 00:09:00.156 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:00.156 } 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.156 [2024-09-30 12:26:11.929644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:00.156 [2024-09-30 12:26:11.929773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.156 [2024-09-30 12:26:11.929820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:00.156 [2024-09-30 12:26:11.929885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.156 [2024-09-30 12:26:11.932483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.156 [2024-09-30 12:26:11.932581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:00.156 [2024-09-30 12:26:11.932715] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:00.156 [2024-09-30 12:26:11.932844] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:00.156 pt1 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.156 12:26:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.156 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.156 "name": "raid_bdev1", 00:09:00.156 "uuid": "2747d857-b225-42d8-84c3-3a03c6bb4827", 00:09:00.156 "strip_size_kb": 0, 00:09:00.156 "state": "configuring", 00:09:00.156 "raid_level": "raid1", 00:09:00.156 "superblock": true, 00:09:00.156 "num_base_bdevs": 2, 00:09:00.156 "num_base_bdevs_discovered": 1, 00:09:00.156 "num_base_bdevs_operational": 2, 00:09:00.156 "base_bdevs_list": [ 00:09:00.156 { 00:09:00.156 "name": "pt1", 00:09:00.156 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.156 
"is_configured": true, 00:09:00.156 "data_offset": 2048, 00:09:00.156 "data_size": 63488 00:09:00.156 }, 00:09:00.156 { 00:09:00.156 "name": null, 00:09:00.157 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.157 "is_configured": false, 00:09:00.157 "data_offset": 2048, 00:09:00.157 "data_size": 63488 00:09:00.157 } 00:09:00.157 ] 00:09:00.157 }' 00:09:00.157 12:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.157 12:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.739 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:00.739 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:00.739 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:00.739 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:00.739 12:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.739 12:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.739 [2024-09-30 12:26:12.396873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:00.739 [2024-09-30 12:26:12.396956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.739 [2024-09-30 12:26:12.396980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:00.739 [2024-09-30 12:26:12.396993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.739 [2024-09-30 12:26:12.397512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.739 [2024-09-30 12:26:12.397535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:00.739 [2024-09-30 12:26:12.397624] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:00.739 [2024-09-30 12:26:12.397651] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:00.739 [2024-09-30 12:26:12.397793] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:00.739 [2024-09-30 12:26:12.397813] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:00.739 [2024-09-30 12:26:12.398058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:00.739 [2024-09-30 12:26:12.398234] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:00.739 [2024-09-30 12:26:12.398244] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:00.739 [2024-09-30 12:26:12.398399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.739 pt2 00:09:00.739 12:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.739 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:00.739 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:00.739 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:00.739 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.739 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.739 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.739 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.739 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:00.739 
12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.739 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.739 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.739 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.739 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.740 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.740 12:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.740 12:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.740 12:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.740 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.740 "name": "raid_bdev1", 00:09:00.740 "uuid": "2747d857-b225-42d8-84c3-3a03c6bb4827", 00:09:00.740 "strip_size_kb": 0, 00:09:00.740 "state": "online", 00:09:00.740 "raid_level": "raid1", 00:09:00.740 "superblock": true, 00:09:00.740 "num_base_bdevs": 2, 00:09:00.740 "num_base_bdevs_discovered": 2, 00:09:00.740 "num_base_bdevs_operational": 2, 00:09:00.740 "base_bdevs_list": [ 00:09:00.740 { 00:09:00.740 "name": "pt1", 00:09:00.740 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.740 "is_configured": true, 00:09:00.740 "data_offset": 2048, 00:09:00.740 "data_size": 63488 00:09:00.740 }, 00:09:00.740 { 00:09:00.740 "name": "pt2", 00:09:00.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.740 "is_configured": true, 00:09:00.740 "data_offset": 2048, 00:09:00.740 "data_size": 63488 00:09:00.740 } 00:09:00.740 ] 00:09:00.740 }' 00:09:00.740 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:09:00.740 12:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.999 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:01.000 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:01.000 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:01.000 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:01.000 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:01.000 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:01.000 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:01.000 12:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.000 12:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.000 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:01.000 [2024-09-30 12:26:12.796410] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.000 12:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.000 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:01.000 "name": "raid_bdev1", 00:09:01.000 "aliases": [ 00:09:01.000 "2747d857-b225-42d8-84c3-3a03c6bb4827" 00:09:01.000 ], 00:09:01.000 "product_name": "Raid Volume", 00:09:01.000 "block_size": 512, 00:09:01.000 "num_blocks": 63488, 00:09:01.000 "uuid": "2747d857-b225-42d8-84c3-3a03c6bb4827", 00:09:01.000 "assigned_rate_limits": { 00:09:01.000 "rw_ios_per_sec": 0, 00:09:01.000 "rw_mbytes_per_sec": 0, 00:09:01.000 "r_mbytes_per_sec": 0, 00:09:01.000 "w_mbytes_per_sec": 0 
00:09:01.000 }, 00:09:01.000 "claimed": false, 00:09:01.000 "zoned": false, 00:09:01.000 "supported_io_types": { 00:09:01.000 "read": true, 00:09:01.000 "write": true, 00:09:01.000 "unmap": false, 00:09:01.000 "flush": false, 00:09:01.000 "reset": true, 00:09:01.000 "nvme_admin": false, 00:09:01.000 "nvme_io": false, 00:09:01.000 "nvme_io_md": false, 00:09:01.000 "write_zeroes": true, 00:09:01.000 "zcopy": false, 00:09:01.000 "get_zone_info": false, 00:09:01.000 "zone_management": false, 00:09:01.000 "zone_append": false, 00:09:01.000 "compare": false, 00:09:01.000 "compare_and_write": false, 00:09:01.000 "abort": false, 00:09:01.000 "seek_hole": false, 00:09:01.000 "seek_data": false, 00:09:01.000 "copy": false, 00:09:01.000 "nvme_iov_md": false 00:09:01.000 }, 00:09:01.000 "memory_domains": [ 00:09:01.000 { 00:09:01.000 "dma_device_id": "system", 00:09:01.000 "dma_device_type": 1 00:09:01.000 }, 00:09:01.000 { 00:09:01.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.000 "dma_device_type": 2 00:09:01.000 }, 00:09:01.000 { 00:09:01.000 "dma_device_id": "system", 00:09:01.000 "dma_device_type": 1 00:09:01.000 }, 00:09:01.000 { 00:09:01.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.000 "dma_device_type": 2 00:09:01.000 } 00:09:01.000 ], 00:09:01.000 "driver_specific": { 00:09:01.000 "raid": { 00:09:01.000 "uuid": "2747d857-b225-42d8-84c3-3a03c6bb4827", 00:09:01.000 "strip_size_kb": 0, 00:09:01.000 "state": "online", 00:09:01.000 "raid_level": "raid1", 00:09:01.000 "superblock": true, 00:09:01.000 "num_base_bdevs": 2, 00:09:01.000 "num_base_bdevs_discovered": 2, 00:09:01.000 "num_base_bdevs_operational": 2, 00:09:01.000 "base_bdevs_list": [ 00:09:01.000 { 00:09:01.000 "name": "pt1", 00:09:01.000 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:01.000 "is_configured": true, 00:09:01.000 "data_offset": 2048, 00:09:01.000 "data_size": 63488 00:09:01.000 }, 00:09:01.000 { 00:09:01.000 "name": "pt2", 00:09:01.000 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:09:01.000 "is_configured": true, 00:09:01.000 "data_offset": 2048, 00:09:01.000 "data_size": 63488 00:09:01.000 } 00:09:01.000 ] 00:09:01.000 } 00:09:01.000 } 00:09:01.000 }' 00:09:01.000 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:01.000 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:01.000 pt2' 00:09:01.000 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:01.260 12:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.260 [2024-09-30 12:26:12.996073] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2747d857-b225-42d8-84c3-3a03c6bb4827 '!=' 2747d857-b225-42d8-84c3-3a03c6bb4827 ']' 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:01.260 [2024-09-30 12:26:13.039813] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.260 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.261 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.261 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.261 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.261 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.261 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.261 "name": "raid_bdev1", 
00:09:01.261 "uuid": "2747d857-b225-42d8-84c3-3a03c6bb4827", 00:09:01.261 "strip_size_kb": 0, 00:09:01.261 "state": "online", 00:09:01.261 "raid_level": "raid1", 00:09:01.261 "superblock": true, 00:09:01.261 "num_base_bdevs": 2, 00:09:01.261 "num_base_bdevs_discovered": 1, 00:09:01.261 "num_base_bdevs_operational": 1, 00:09:01.261 "base_bdevs_list": [ 00:09:01.261 { 00:09:01.261 "name": null, 00:09:01.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.261 "is_configured": false, 00:09:01.261 "data_offset": 0, 00:09:01.261 "data_size": 63488 00:09:01.261 }, 00:09:01.261 { 00:09:01.261 "name": "pt2", 00:09:01.261 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:01.261 "is_configured": true, 00:09:01.261 "data_offset": 2048, 00:09:01.261 "data_size": 63488 00:09:01.261 } 00:09:01.261 ] 00:09:01.261 }' 00:09:01.261 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.261 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.831 [2024-09-30 12:26:13.479155] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:01.831 [2024-09-30 12:26:13.479230] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.831 [2024-09-30 12:26:13.479338] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.831 [2024-09-30 12:26:13.479406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.831 [2024-09-30 12:26:13.479479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:09:01.831 12:26:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.831 [2024-09-30 12:26:13.555042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:01.831 [2024-09-30 12:26:13.555097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.831 [2024-09-30 12:26:13.555131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:01.831 [2024-09-30 12:26:13.555143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.831 [2024-09-30 12:26:13.557308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.831 [2024-09-30 12:26:13.557354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:01.831 [2024-09-30 12:26:13.557447] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:01.831 [2024-09-30 12:26:13.557494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:01.831 [2024-09-30 12:26:13.557600] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:01.831 [2024-09-30 12:26:13.557613] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:01.831 [2024-09-30 12:26:13.557859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:01.831 [2024-09-30 12:26:13.558095] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:01.831 [2024-09-30 12:26:13.558111] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:01.831 
[2024-09-30 12:26:13.558255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.831 pt2 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.831 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.832 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.832 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.832 "name": 
"raid_bdev1", 00:09:01.832 "uuid": "2747d857-b225-42d8-84c3-3a03c6bb4827", 00:09:01.832 "strip_size_kb": 0, 00:09:01.832 "state": "online", 00:09:01.832 "raid_level": "raid1", 00:09:01.832 "superblock": true, 00:09:01.832 "num_base_bdevs": 2, 00:09:01.832 "num_base_bdevs_discovered": 1, 00:09:01.832 "num_base_bdevs_operational": 1, 00:09:01.832 "base_bdevs_list": [ 00:09:01.832 { 00:09:01.832 "name": null, 00:09:01.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.832 "is_configured": false, 00:09:01.832 "data_offset": 2048, 00:09:01.832 "data_size": 63488 00:09:01.832 }, 00:09:01.832 { 00:09:01.832 "name": "pt2", 00:09:01.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:01.832 "is_configured": true, 00:09:01.832 "data_offset": 2048, 00:09:01.832 "data_size": 63488 00:09:01.832 } 00:09:01.832 ] 00:09:01.832 }' 00:09:01.832 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.832 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.091 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:02.091 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.091 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.091 [2024-09-30 12:26:13.938349] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:02.091 [2024-09-30 12:26:13.938433] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.091 [2024-09-30 12:26:13.938521] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.091 [2024-09-30 12:26:13.938585] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:02.091 [2024-09-30 12:26:13.938662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name raid_bdev1, state offline 00:09:02.091 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.091 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.091 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:02.091 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.091 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.091 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.351 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:02.351 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:02.351 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:09:02.351 12:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:02.351 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.351 12:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.351 [2024-09-30 12:26:13.998263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:02.351 [2024-09-30 12:26:13.998365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.351 [2024-09-30 12:26:13.998404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:02.351 [2024-09-30 12:26:13.998437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.351 [2024-09-30 12:26:14.000586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.351 [2024-09-30 12:26:14.000672] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:02.351 [2024-09-30 12:26:14.000791] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:02.351 [2024-09-30 12:26:14.000873] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:02.351 [2024-09-30 12:26:14.001048] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:02.351 [2024-09-30 12:26:14.001106] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:02.351 [2024-09-30 12:26:14.001151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:02.351 [2024-09-30 12:26:14.001255] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:02.351 [2024-09-30 12:26:14.001373] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:02.351 [2024-09-30 12:26:14.001414] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:02.351 [2024-09-30 12:26:14.001666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:02.351 [2024-09-30 12:26:14.001875] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:02.351 [2024-09-30 12:26:14.001928] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:02.351 [2024-09-30 12:26:14.002111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.351 pt1 00:09:02.351 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.351 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:09:02.351 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:09:02.351 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.351 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.351 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.351 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:02.351 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:02.351 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.351 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.351 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.351 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.351 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.351 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.351 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.351 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.351 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.351 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.351 "name": "raid_bdev1", 00:09:02.352 "uuid": "2747d857-b225-42d8-84c3-3a03c6bb4827", 00:09:02.352 "strip_size_kb": 0, 00:09:02.352 "state": "online", 00:09:02.352 "raid_level": "raid1", 00:09:02.352 "superblock": true, 00:09:02.352 "num_base_bdevs": 2, 00:09:02.352 "num_base_bdevs_discovered": 1, 00:09:02.352 "num_base_bdevs_operational": 1, 00:09:02.352 
"base_bdevs_list": [ 00:09:02.352 { 00:09:02.352 "name": null, 00:09:02.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.352 "is_configured": false, 00:09:02.352 "data_offset": 2048, 00:09:02.352 "data_size": 63488 00:09:02.352 }, 00:09:02.352 { 00:09:02.352 "name": "pt2", 00:09:02.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:02.352 "is_configured": true, 00:09:02.352 "data_offset": 2048, 00:09:02.352 "data_size": 63488 00:09:02.352 } 00:09:02.352 ] 00:09:02.352 }' 00:09:02.352 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.352 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.611 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:02.611 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:02.611 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.611 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.611 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.611 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:02.611 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:02.611 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.611 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.611 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:02.611 [2024-09-30 12:26:14.481659] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.611 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:02.870 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2747d857-b225-42d8-84c3-3a03c6bb4827 '!=' 2747d857-b225-42d8-84c3-3a03c6bb4827 ']' 00:09:02.870 12:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63101 00:09:02.870 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 63101 ']' 00:09:02.870 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 63101 00:09:02.870 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:02.870 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.870 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63101 00:09:02.870 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:02.870 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:02.870 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63101' 00:09:02.870 killing process with pid 63101 00:09:02.870 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 63101 00:09:02.870 [2024-09-30 12:26:14.569422] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:02.870 [2024-09-30 12:26:14.569505] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.870 [2024-09-30 12:26:14.569553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:02.870 [2024-09-30 12:26:14.569571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:02.870 12:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 63101 00:09:02.870 [2024-09-30 12:26:14.764463] 
bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:04.251 12:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:04.251 00:09:04.251 real 0m6.007s 00:09:04.251 user 0m8.975s 00:09:04.251 sys 0m1.016s 00:09:04.251 12:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.251 ************************************ 00:09:04.251 END TEST raid_superblock_test 00:09:04.251 ************************************ 00:09:04.251 12:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.251 12:26:16 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:09:04.251 12:26:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:04.251 12:26:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.251 12:26:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:04.251 ************************************ 00:09:04.251 START TEST raid_read_error_test 00:09:04.251 ************************************ 00:09:04.251 12:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:09:04.251 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:04.251 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:04.251 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:04.251 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.z15e3Mn34c 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63430 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63430 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@831 -- # '[' -z 63430 ']' 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:04.252 12:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.512 [2024-09-30 12:26:16.157428] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:04.512 [2024-09-30 12:26:16.157620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63430 ] 00:09:04.512 [2024-09-30 12:26:16.321595] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.771 [2024-09-30 12:26:16.517226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.030 [2024-09-30 12:26:16.706697] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.030 [2024-09-30 12:26:16.706769] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.290 12:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:05.290 12:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:05.290 12:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:05.290 12:26:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:05.290 12:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.290 12:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.290 BaseBdev1_malloc 00:09:05.290 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.290 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:05.290 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.290 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.290 true 00:09:05.290 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.290 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:05.290 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.290 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.290 [2024-09-30 12:26:17.036691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:05.290 [2024-09-30 12:26:17.036783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.290 [2024-09-30 12:26:17.036803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:05.290 [2024-09-30 12:26:17.036816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.290 [2024-09-30 12:26:17.038888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.290 [2024-09-30 12:26:17.038933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:09:05.290 BaseBdev1 00:09:05.290 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.290 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:05.290 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:05.290 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.290 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.290 BaseBdev2_malloc 00:09:05.290 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.290 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:05.290 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.291 true 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.291 [2024-09-30 12:26:17.111205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:05.291 [2024-09-30 12:26:17.111266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.291 [2024-09-30 12:26:17.111285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:05.291 [2024-09-30 12:26:17.111298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:09:05.291 [2024-09-30 12:26:17.113447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.291 [2024-09-30 12:26:17.113552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:05.291 BaseBdev2 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.291 [2024-09-30 12:26:17.123250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.291 [2024-09-30 12:26:17.125087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.291 [2024-09-30 12:26:17.125303] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:05.291 [2024-09-30 12:26:17.125319] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:05.291 [2024-09-30 12:26:17.125551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:05.291 [2024-09-30 12:26:17.125730] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:05.291 [2024-09-30 12:26:17.125756] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:05.291 [2024-09-30 12:26:17.125913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.291 "name": "raid_bdev1", 00:09:05.291 "uuid": "9a36222a-8214-4eb6-91db-1460571ac9e7", 00:09:05.291 "strip_size_kb": 0, 00:09:05.291 "state": "online", 00:09:05.291 "raid_level": "raid1", 00:09:05.291 "superblock": true, 00:09:05.291 "num_base_bdevs": 2, 00:09:05.291 "num_base_bdevs_discovered": 2, 00:09:05.291 "num_base_bdevs_operational": 
2, 00:09:05.291 "base_bdevs_list": [ 00:09:05.291 { 00:09:05.291 "name": "BaseBdev1", 00:09:05.291 "uuid": "e4352a42-5fdb-5254-8e86-cd63ec11a733", 00:09:05.291 "is_configured": true, 00:09:05.291 "data_offset": 2048, 00:09:05.291 "data_size": 63488 00:09:05.291 }, 00:09:05.291 { 00:09:05.291 "name": "BaseBdev2", 00:09:05.291 "uuid": "cffe2a0e-89c4-5e3c-9b86-2bf8e182c64e", 00:09:05.291 "is_configured": true, 00:09:05.291 "data_offset": 2048, 00:09:05.291 "data_size": 63488 00:09:05.291 } 00:09:05.291 ] 00:09:05.291 }' 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.291 12:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.860 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:05.860 12:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:05.860 [2024-09-30 12:26:17.631548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:06.798 
12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.798 "name": "raid_bdev1", 00:09:06.798 "uuid": "9a36222a-8214-4eb6-91db-1460571ac9e7", 00:09:06.798 "strip_size_kb": 0, 00:09:06.798 "state": "online", 00:09:06.798 "raid_level": "raid1", 00:09:06.798 "superblock": true, 00:09:06.798 "num_base_bdevs": 
2, 00:09:06.798 "num_base_bdevs_discovered": 2, 00:09:06.798 "num_base_bdevs_operational": 2, 00:09:06.798 "base_bdevs_list": [ 00:09:06.798 { 00:09:06.798 "name": "BaseBdev1", 00:09:06.798 "uuid": "e4352a42-5fdb-5254-8e86-cd63ec11a733", 00:09:06.798 "is_configured": true, 00:09:06.798 "data_offset": 2048, 00:09:06.798 "data_size": 63488 00:09:06.798 }, 00:09:06.798 { 00:09:06.798 "name": "BaseBdev2", 00:09:06.798 "uuid": "cffe2a0e-89c4-5e3c-9b86-2bf8e182c64e", 00:09:06.798 "is_configured": true, 00:09:06.798 "data_offset": 2048, 00:09:06.798 "data_size": 63488 00:09:06.798 } 00:09:06.798 ] 00:09:06.798 }' 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.798 12:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.366 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:07.366 12:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.366 12:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.366 [2024-09-30 12:26:18.970816] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.366 [2024-09-30 12:26:18.970858] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.366 [2024-09-30 12:26:18.973515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.366 [2024-09-30 12:26:18.973601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.366 [2024-09-30 12:26:18.973724] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.366 [2024-09-30 12:26:18.973795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:07.366 { 00:09:07.366 "results": [ 00:09:07.366 { 00:09:07.366 "job": 
"raid_bdev1", 00:09:07.366 "core_mask": "0x1", 00:09:07.366 "workload": "randrw", 00:09:07.366 "percentage": 50, 00:09:07.366 "status": "finished", 00:09:07.366 "queue_depth": 1, 00:09:07.366 "io_size": 131072, 00:09:07.366 "runtime": 1.340019, 00:09:07.366 "iops": 18250.487493087785, 00:09:07.366 "mibps": 2281.310936635973, 00:09:07.366 "io_failed": 0, 00:09:07.366 "io_timeout": 0, 00:09:07.366 "avg_latency_us": 52.09644641191453, 00:09:07.366 "min_latency_us": 22.805240174672488, 00:09:07.366 "max_latency_us": 1373.6803493449781 00:09:07.366 } 00:09:07.366 ], 00:09:07.366 "core_count": 1 00:09:07.366 } 00:09:07.366 12:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.366 12:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63430 00:09:07.366 12:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 63430 ']' 00:09:07.366 12:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 63430 00:09:07.366 12:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:07.366 12:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:07.366 12:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63430 00:09:07.366 12:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:07.366 12:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:07.366 12:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63430' 00:09:07.366 killing process with pid 63430 00:09:07.366 12:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 63430 00:09:07.366 [2024-09-30 12:26:19.013068] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.366 12:26:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 63430 00:09:07.366 [2024-09-30 12:26:19.141241] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:08.745 12:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:08.745 12:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.z15e3Mn34c 00:09:08.745 12:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:08.745 12:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:08.745 12:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:08.745 ************************************ 00:09:08.745 END TEST raid_read_error_test 00:09:08.745 ************************************ 00:09:08.745 12:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:08.745 12:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:08.745 12:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:08.745 00:09:08.745 real 0m4.364s 00:09:08.745 user 0m5.157s 00:09:08.745 sys 0m0.525s 00:09:08.745 12:26:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.745 12:26:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.745 12:26:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:08.745 12:26:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:08.745 12:26:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.745 12:26:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:08.745 ************************************ 00:09:08.745 START TEST raid_write_error_test 00:09:08.745 ************************************ 00:09:08.745 12:26:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:08.745 
12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ovJoR1gh0B 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63577 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63577 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 63577 ']' 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.745 12:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.745 [2024-09-30 12:26:20.595564] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:08.745 [2024-09-30 12:26:20.595806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63577 ] 00:09:09.005 [2024-09-30 12:26:20.755062] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.265 [2024-09-30 12:26:20.951426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.265 [2024-09-30 12:26:21.140779] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.265 [2024-09-30 12:26:21.140820] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.524 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.524 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:09.524 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:09.524 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:09.524 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.524 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.784 BaseBdev1_malloc 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.784 true 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.784 [2024-09-30 12:26:21.464648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:09.784 [2024-09-30 12:26:21.464785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.784 [2024-09-30 12:26:21.464810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:09.784 [2024-09-30 12:26:21.464824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.784 [2024-09-30 12:26:21.466976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.784 [2024-09-30 12:26:21.467031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:09.784 BaseBdev1 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.784 BaseBdev2_malloc 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:09.784 12:26:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.784 true 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.784 [2024-09-30 12:26:21.561112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:09.784 [2024-09-30 12:26:21.561169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.784 [2024-09-30 12:26:21.561203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:09.784 [2024-09-30 12:26:21.561215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.784 [2024-09-30 12:26:21.563240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.784 [2024-09-30 12:26:21.563286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:09.784 BaseBdev2 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.784 [2024-09-30 12:26:21.573175] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:09.784 [2024-09-30 12:26:21.574990] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.784 [2024-09-30 12:26:21.575246] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:09.784 [2024-09-30 12:26:21.575269] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:09.784 [2024-09-30 12:26:21.575522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:09.784 [2024-09-30 12:26:21.575700] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:09.784 [2024-09-30 12:26:21.575712] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:09.784 [2024-09-30 12:26:21.575890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.784 "name": "raid_bdev1", 00:09:09.784 "uuid": "cc521027-aafc-4dd8-abdd-f8a7b552a2cc", 00:09:09.784 "strip_size_kb": 0, 00:09:09.784 "state": "online", 00:09:09.784 "raid_level": "raid1", 00:09:09.784 "superblock": true, 00:09:09.784 "num_base_bdevs": 2, 00:09:09.784 "num_base_bdevs_discovered": 2, 00:09:09.784 "num_base_bdevs_operational": 2, 00:09:09.784 "base_bdevs_list": [ 00:09:09.784 { 00:09:09.784 "name": "BaseBdev1", 00:09:09.784 "uuid": "5a21c781-142e-5903-a3c2-e93f23aca1a0", 00:09:09.784 "is_configured": true, 00:09:09.784 "data_offset": 2048, 00:09:09.784 "data_size": 63488 00:09:09.784 }, 00:09:09.784 { 00:09:09.784 "name": "BaseBdev2", 00:09:09.784 "uuid": "1df78d8c-2a1b-5b38-bc3b-b463c149534a", 00:09:09.784 "is_configured": true, 00:09:09.784 "data_offset": 2048, 00:09:09.784 "data_size": 63488 00:09:09.784 } 00:09:09.784 ] 00:09:09.784 }' 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.784 12:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.354 12:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:10.354 12:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:10.354 [2024-09-30 12:26:22.117513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.293 [2024-09-30 12:26:23.061660] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:11.293 [2024-09-30 12:26:23.061853] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.293 [2024-09-30 12:26:23.062096] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.293 12:26:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.293 "name": "raid_bdev1", 00:09:11.293 "uuid": "cc521027-aafc-4dd8-abdd-f8a7b552a2cc", 00:09:11.293 "strip_size_kb": 0, 00:09:11.293 "state": "online", 00:09:11.293 "raid_level": "raid1", 00:09:11.293 "superblock": true, 00:09:11.293 "num_base_bdevs": 2, 00:09:11.293 "num_base_bdevs_discovered": 1, 00:09:11.293 "num_base_bdevs_operational": 1, 00:09:11.293 "base_bdevs_list": [ 00:09:11.293 { 00:09:11.293 "name": null, 00:09:11.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.293 "is_configured": false, 00:09:11.293 "data_offset": 0, 00:09:11.293 "data_size": 63488 00:09:11.293 }, 
00:09:11.293 { 00:09:11.293 "name": "BaseBdev2", 00:09:11.293 "uuid": "1df78d8c-2a1b-5b38-bc3b-b463c149534a", 00:09:11.293 "is_configured": true, 00:09:11.293 "data_offset": 2048, 00:09:11.293 "data_size": 63488 00:09:11.293 } 00:09:11.293 ] 00:09:11.293 }' 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.293 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.863 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:11.863 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.863 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.863 [2024-09-30 12:26:23.494375] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:11.863 [2024-09-30 12:26:23.494413] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.863 [2024-09-30 12:26:23.497007] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.863 [2024-09-30 12:26:23.497083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.863 [2024-09-30 12:26:23.497179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.863 [2024-09-30 12:26:23.497231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:11.863 { 00:09:11.863 "results": [ 00:09:11.863 { 00:09:11.863 "job": "raid_bdev1", 00:09:11.863 "core_mask": "0x1", 00:09:11.863 "workload": "randrw", 00:09:11.863 "percentage": 50, 00:09:11.863 "status": "finished", 00:09:11.863 "queue_depth": 1, 00:09:11.863 "io_size": 131072, 00:09:11.863 "runtime": 1.377676, 00:09:11.863 "iops": 21448.43925567405, 00:09:11.863 "mibps": 2681.054906959256, 00:09:11.863 "io_failed": 0, 
00:09:11.863 "io_timeout": 0, 00:09:11.863 "avg_latency_us": 43.95959626531078, 00:09:11.863 "min_latency_us": 22.69344978165939, 00:09:11.863 "max_latency_us": 1359.3711790393013 00:09:11.863 } 00:09:11.863 ], 00:09:11.863 "core_count": 1 00:09:11.863 } 00:09:11.863 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.863 12:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63577 00:09:11.863 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 63577 ']' 00:09:11.863 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 63577 00:09:11.863 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:11.863 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:11.863 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63577 00:09:11.863 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:11.863 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:11.863 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63577' 00:09:11.863 killing process with pid 63577 00:09:11.863 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 63577 00:09:11.863 [2024-09-30 12:26:23.544538] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:11.863 12:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 63577 00:09:11.863 [2024-09-30 12:26:23.678528] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:13.244 12:26:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ovJoR1gh0B 00:09:13.244 12:26:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:13.244 12:26:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:13.244 12:26:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:13.244 12:26:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:13.244 12:26:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:13.244 12:26:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:13.244 12:26:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:13.244 00:09:13.244 real 0m4.465s 00:09:13.244 user 0m5.272s 00:09:13.244 sys 0m0.544s 00:09:13.244 12:26:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.244 ************************************ 00:09:13.244 END TEST raid_write_error_test 00:09:13.244 ************************************ 00:09:13.244 12:26:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.244 12:26:25 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:13.244 12:26:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:13.244 12:26:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:13.244 12:26:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:13.244 12:26:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.244 12:26:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:13.244 ************************************ 00:09:13.244 START TEST raid_state_function_test 00:09:13.244 ************************************ 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:09:13.244 12:26:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63715 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63715' 00:09:13.244 Process raid pid: 63715 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63715 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 63715 ']' 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.244 12:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.244 [2024-09-30 12:26:25.125859] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:13.244 [2024-09-30 12:26:25.126054] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.504 [2024-09-30 12:26:25.288652] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.763 [2024-09-30 12:26:25.497344] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.021 [2024-09-30 12:26:25.723683] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.021 [2024-09-30 12:26:25.723806] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.280 12:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.280 12:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:14.280 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.280 12:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.280 12:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.280 [2024-09-30 12:26:25.928233] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.280 [2024-09-30 12:26:25.928295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.280 [2024-09-30 12:26:25.928308] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.280 [2024-09-30 12:26:25.928320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.280 [2024-09-30 12:26:25.928328] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.280 [2024-09-30 12:26:25.928341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.280 12:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.280 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.280 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.281 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.281 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.281 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.281 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.281 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.281 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.281 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.281 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.281 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.281 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:14.281 12:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.281 12:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.281 12:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.281 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.281 "name": "Existed_Raid", 00:09:14.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.281 "strip_size_kb": 64, 00:09:14.281 "state": "configuring", 00:09:14.281 "raid_level": "raid0", 00:09:14.281 "superblock": false, 00:09:14.281 "num_base_bdevs": 3, 00:09:14.281 "num_base_bdevs_discovered": 0, 00:09:14.281 "num_base_bdevs_operational": 3, 00:09:14.281 "base_bdevs_list": [ 00:09:14.281 { 00:09:14.281 "name": "BaseBdev1", 00:09:14.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.281 "is_configured": false, 00:09:14.281 "data_offset": 0, 00:09:14.281 "data_size": 0 00:09:14.281 }, 00:09:14.281 { 00:09:14.281 "name": "BaseBdev2", 00:09:14.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.281 "is_configured": false, 00:09:14.281 "data_offset": 0, 00:09:14.281 "data_size": 0 00:09:14.281 }, 00:09:14.281 { 00:09:14.281 "name": "BaseBdev3", 00:09:14.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.281 "is_configured": false, 00:09:14.281 "data_offset": 0, 00:09:14.281 "data_size": 0 00:09:14.281 } 00:09:14.281 ] 00:09:14.281 }' 00:09:14.281 12:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.281 12:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.540 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.540 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.540 12:26:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.540 [2024-09-30 12:26:26.335580] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.540 [2024-09-30 12:26:26.335692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:14.540 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.540 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.540 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.540 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.540 [2024-09-30 12:26:26.347555] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.540 [2024-09-30 12:26:26.347650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.540 [2024-09-30 12:26:26.347681] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.540 [2024-09-30 12:26:26.347709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.540 [2024-09-30 12:26:26.347731] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.540 [2024-09-30 12:26:26.347808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.540 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.540 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:14.540 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:14.540 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.540 [2024-09-30 12:26:26.407695] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.540 BaseBdev1 00:09:14.541 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.541 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:14.541 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:14.541 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:14.541 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:14.541 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:14.541 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:14.541 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:14.541 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.541 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.541 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.541 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:14.541 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.541 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.800 [ 00:09:14.800 { 00:09:14.800 "name": "BaseBdev1", 00:09:14.800 "aliases": [ 00:09:14.801 "b9de1bc7-effd-4f9b-a14e-d15e02a95c46" 00:09:14.801 ], 00:09:14.801 
"product_name": "Malloc disk", 00:09:14.801 "block_size": 512, 00:09:14.801 "num_blocks": 65536, 00:09:14.801 "uuid": "b9de1bc7-effd-4f9b-a14e-d15e02a95c46", 00:09:14.801 "assigned_rate_limits": { 00:09:14.801 "rw_ios_per_sec": 0, 00:09:14.801 "rw_mbytes_per_sec": 0, 00:09:14.801 "r_mbytes_per_sec": 0, 00:09:14.801 "w_mbytes_per_sec": 0 00:09:14.801 }, 00:09:14.801 "claimed": true, 00:09:14.801 "claim_type": "exclusive_write", 00:09:14.801 "zoned": false, 00:09:14.801 "supported_io_types": { 00:09:14.801 "read": true, 00:09:14.801 "write": true, 00:09:14.801 "unmap": true, 00:09:14.801 "flush": true, 00:09:14.801 "reset": true, 00:09:14.801 "nvme_admin": false, 00:09:14.801 "nvme_io": false, 00:09:14.801 "nvme_io_md": false, 00:09:14.801 "write_zeroes": true, 00:09:14.801 "zcopy": true, 00:09:14.801 "get_zone_info": false, 00:09:14.801 "zone_management": false, 00:09:14.801 "zone_append": false, 00:09:14.801 "compare": false, 00:09:14.801 "compare_and_write": false, 00:09:14.801 "abort": true, 00:09:14.801 "seek_hole": false, 00:09:14.801 "seek_data": false, 00:09:14.801 "copy": true, 00:09:14.801 "nvme_iov_md": false 00:09:14.801 }, 00:09:14.801 "memory_domains": [ 00:09:14.801 { 00:09:14.801 "dma_device_id": "system", 00:09:14.801 "dma_device_type": 1 00:09:14.801 }, 00:09:14.801 { 00:09:14.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.801 "dma_device_type": 2 00:09:14.801 } 00:09:14.801 ], 00:09:14.801 "driver_specific": {} 00:09:14.801 } 00:09:14.801 ] 00:09:14.801 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.801 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:14.801 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.801 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.801 12:26:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.801 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.801 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.801 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.801 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.801 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.801 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.801 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.801 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.801 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.801 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.801 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.801 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.801 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.801 "name": "Existed_Raid", 00:09:14.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.801 "strip_size_kb": 64, 00:09:14.801 "state": "configuring", 00:09:14.801 "raid_level": "raid0", 00:09:14.801 "superblock": false, 00:09:14.801 "num_base_bdevs": 3, 00:09:14.801 "num_base_bdevs_discovered": 1, 00:09:14.801 "num_base_bdevs_operational": 3, 00:09:14.801 "base_bdevs_list": [ 00:09:14.801 { 00:09:14.801 "name": "BaseBdev1", 
00:09:14.801 "uuid": "b9de1bc7-effd-4f9b-a14e-d15e02a95c46", 00:09:14.801 "is_configured": true, 00:09:14.801 "data_offset": 0, 00:09:14.801 "data_size": 65536 00:09:14.801 }, 00:09:14.801 { 00:09:14.801 "name": "BaseBdev2", 00:09:14.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.801 "is_configured": false, 00:09:14.801 "data_offset": 0, 00:09:14.801 "data_size": 0 00:09:14.801 }, 00:09:14.801 { 00:09:14.801 "name": "BaseBdev3", 00:09:14.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.801 "is_configured": false, 00:09:14.801 "data_offset": 0, 00:09:14.801 "data_size": 0 00:09:14.801 } 00:09:14.801 ] 00:09:14.801 }' 00:09:14.801 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.801 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.064 [2024-09-30 12:26:26.918881] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.064 [2024-09-30 12:26:26.918997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.064 [2024-09-30 
12:26:26.926955] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.064 [2024-09-30 12:26:26.928897] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.064 [2024-09-30 12:26:26.928987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.064 [2024-09-30 12:26:26.929021] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:15.064 [2024-09-30 12:26:26.929049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.064 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.345 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.345 "name": "Existed_Raid", 00:09:15.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.345 "strip_size_kb": 64, 00:09:15.345 "state": "configuring", 00:09:15.345 "raid_level": "raid0", 00:09:15.345 "superblock": false, 00:09:15.345 "num_base_bdevs": 3, 00:09:15.345 "num_base_bdevs_discovered": 1, 00:09:15.345 "num_base_bdevs_operational": 3, 00:09:15.345 "base_bdevs_list": [ 00:09:15.345 { 00:09:15.345 "name": "BaseBdev1", 00:09:15.345 "uuid": "b9de1bc7-effd-4f9b-a14e-d15e02a95c46", 00:09:15.345 "is_configured": true, 00:09:15.345 "data_offset": 0, 00:09:15.345 "data_size": 65536 00:09:15.345 }, 00:09:15.345 { 00:09:15.345 "name": "BaseBdev2", 00:09:15.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.345 "is_configured": false, 00:09:15.345 "data_offset": 0, 00:09:15.345 "data_size": 0 00:09:15.345 }, 00:09:15.345 { 00:09:15.345 "name": "BaseBdev3", 00:09:15.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.345 "is_configured": false, 00:09:15.345 "data_offset": 0, 00:09:15.345 "data_size": 0 00:09:15.345 } 00:09:15.345 ] 00:09:15.345 }' 00:09:15.345 12:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:15.345 12:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.608 [2024-09-30 12:26:27.382900] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.608 BaseBdev2 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:15.608 12:26:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.608 [ 00:09:15.608 { 00:09:15.608 "name": "BaseBdev2", 00:09:15.608 "aliases": [ 00:09:15.608 "2392877a-8758-40d4-b571-5e2e821e1f82" 00:09:15.608 ], 00:09:15.608 "product_name": "Malloc disk", 00:09:15.608 "block_size": 512, 00:09:15.608 "num_blocks": 65536, 00:09:15.608 "uuid": "2392877a-8758-40d4-b571-5e2e821e1f82", 00:09:15.608 "assigned_rate_limits": { 00:09:15.608 "rw_ios_per_sec": 0, 00:09:15.608 "rw_mbytes_per_sec": 0, 00:09:15.608 "r_mbytes_per_sec": 0, 00:09:15.608 "w_mbytes_per_sec": 0 00:09:15.608 }, 00:09:15.608 "claimed": true, 00:09:15.608 "claim_type": "exclusive_write", 00:09:15.608 "zoned": false, 00:09:15.608 "supported_io_types": { 00:09:15.608 "read": true, 00:09:15.608 "write": true, 00:09:15.608 "unmap": true, 00:09:15.608 "flush": true, 00:09:15.608 "reset": true, 00:09:15.608 "nvme_admin": false, 00:09:15.608 "nvme_io": false, 00:09:15.608 "nvme_io_md": false, 00:09:15.608 "write_zeroes": true, 00:09:15.608 "zcopy": true, 00:09:15.608 "get_zone_info": false, 00:09:15.608 "zone_management": false, 00:09:15.608 "zone_append": false, 00:09:15.608 "compare": false, 00:09:15.608 "compare_and_write": false, 00:09:15.608 "abort": true, 00:09:15.608 "seek_hole": false, 00:09:15.608 "seek_data": false, 00:09:15.608 "copy": true, 00:09:15.608 "nvme_iov_md": false 00:09:15.608 }, 00:09:15.608 "memory_domains": [ 00:09:15.608 { 00:09:15.608 "dma_device_id": "system", 00:09:15.608 "dma_device_type": 1 00:09:15.608 }, 00:09:15.608 { 00:09:15.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.608 "dma_device_type": 2 00:09:15.608 } 00:09:15.608 ], 00:09:15.608 "driver_specific": {} 00:09:15.608 } 00:09:15.608 ] 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.608 12:26:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.608 "name": "Existed_Raid", 00:09:15.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.608 "strip_size_kb": 64, 00:09:15.608 "state": "configuring", 00:09:15.608 "raid_level": "raid0", 00:09:15.608 "superblock": false, 00:09:15.608 "num_base_bdevs": 3, 00:09:15.608 "num_base_bdevs_discovered": 2, 00:09:15.608 "num_base_bdevs_operational": 3, 00:09:15.608 "base_bdevs_list": [ 00:09:15.608 { 00:09:15.608 "name": "BaseBdev1", 00:09:15.608 "uuid": "b9de1bc7-effd-4f9b-a14e-d15e02a95c46", 00:09:15.608 "is_configured": true, 00:09:15.608 "data_offset": 0, 00:09:15.608 "data_size": 65536 00:09:15.608 }, 00:09:15.608 { 00:09:15.608 "name": "BaseBdev2", 00:09:15.608 "uuid": "2392877a-8758-40d4-b571-5e2e821e1f82", 00:09:15.608 "is_configured": true, 00:09:15.608 "data_offset": 0, 00:09:15.608 "data_size": 65536 00:09:15.608 }, 00:09:15.608 { 00:09:15.608 "name": "BaseBdev3", 00:09:15.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.608 "is_configured": false, 00:09:15.608 "data_offset": 0, 00:09:15.608 "data_size": 0 00:09:15.608 } 00:09:15.608 ] 00:09:15.608 }' 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.608 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.178 [2024-09-30 12:26:27.900827] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:16.178 [2024-09-30 12:26:27.900872] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:16.178 [2024-09-30 12:26:27.900888] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:16.178 [2024-09-30 12:26:27.901155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:16.178 [2024-09-30 12:26:27.901348] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:16.178 [2024-09-30 12:26:27.901361] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:16.178 [2024-09-30 12:26:27.901635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.178 BaseBdev3 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.178 
12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.178 [ 00:09:16.178 { 00:09:16.178 "name": "BaseBdev3", 00:09:16.178 "aliases": [ 00:09:16.178 "3dcc10fa-ec05-4bf2-a609-189dc1bdb603" 00:09:16.178 ], 00:09:16.178 "product_name": "Malloc disk", 00:09:16.178 "block_size": 512, 00:09:16.178 "num_blocks": 65536, 00:09:16.178 "uuid": "3dcc10fa-ec05-4bf2-a609-189dc1bdb603", 00:09:16.178 "assigned_rate_limits": { 00:09:16.178 "rw_ios_per_sec": 0, 00:09:16.178 "rw_mbytes_per_sec": 0, 00:09:16.178 "r_mbytes_per_sec": 0, 00:09:16.178 "w_mbytes_per_sec": 0 00:09:16.178 }, 00:09:16.178 "claimed": true, 00:09:16.178 "claim_type": "exclusive_write", 00:09:16.178 "zoned": false, 00:09:16.178 "supported_io_types": { 00:09:16.178 "read": true, 00:09:16.178 "write": true, 00:09:16.178 "unmap": true, 00:09:16.178 "flush": true, 00:09:16.178 "reset": true, 00:09:16.178 "nvme_admin": false, 00:09:16.178 "nvme_io": false, 00:09:16.178 "nvme_io_md": false, 00:09:16.178 "write_zeroes": true, 00:09:16.178 "zcopy": true, 00:09:16.178 "get_zone_info": false, 00:09:16.178 "zone_management": false, 00:09:16.178 "zone_append": false, 00:09:16.178 "compare": false, 00:09:16.178 "compare_and_write": false, 00:09:16.178 "abort": true, 00:09:16.178 "seek_hole": false, 00:09:16.178 "seek_data": false, 00:09:16.178 "copy": true, 00:09:16.178 "nvme_iov_md": false 00:09:16.178 }, 00:09:16.178 "memory_domains": [ 00:09:16.178 { 00:09:16.178 "dma_device_id": "system", 00:09:16.178 "dma_device_type": 1 00:09:16.178 }, 00:09:16.178 { 00:09:16.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.178 "dma_device_type": 2 00:09:16.178 } 00:09:16.178 ], 00:09:16.178 "driver_specific": {} 00:09:16.178 } 00:09:16.178 ] 
00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.178 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.179 "name": "Existed_Raid", 00:09:16.179 "uuid": "13de0357-5d98-42e7-a610-5bd1da2086fa", 00:09:16.179 "strip_size_kb": 64, 00:09:16.179 "state": "online", 00:09:16.179 "raid_level": "raid0", 00:09:16.179 "superblock": false, 00:09:16.179 "num_base_bdevs": 3, 00:09:16.179 "num_base_bdevs_discovered": 3, 00:09:16.179 "num_base_bdevs_operational": 3, 00:09:16.179 "base_bdevs_list": [ 00:09:16.179 { 00:09:16.179 "name": "BaseBdev1", 00:09:16.179 "uuid": "b9de1bc7-effd-4f9b-a14e-d15e02a95c46", 00:09:16.179 "is_configured": true, 00:09:16.179 "data_offset": 0, 00:09:16.179 "data_size": 65536 00:09:16.179 }, 00:09:16.179 { 00:09:16.179 "name": "BaseBdev2", 00:09:16.179 "uuid": "2392877a-8758-40d4-b571-5e2e821e1f82", 00:09:16.179 "is_configured": true, 00:09:16.179 "data_offset": 0, 00:09:16.179 "data_size": 65536 00:09:16.179 }, 00:09:16.179 { 00:09:16.179 "name": "BaseBdev3", 00:09:16.179 "uuid": "3dcc10fa-ec05-4bf2-a609-189dc1bdb603", 00:09:16.179 "is_configured": true, 00:09:16.179 "data_offset": 0, 00:09:16.179 "data_size": 65536 00:09:16.179 } 00:09:16.179 ] 00:09:16.179 }' 00:09:16.179 12:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.179 12:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.749 [2024-09-30 12:26:28.404308] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.749 "name": "Existed_Raid", 00:09:16.749 "aliases": [ 00:09:16.749 "13de0357-5d98-42e7-a610-5bd1da2086fa" 00:09:16.749 ], 00:09:16.749 "product_name": "Raid Volume", 00:09:16.749 "block_size": 512, 00:09:16.749 "num_blocks": 196608, 00:09:16.749 "uuid": "13de0357-5d98-42e7-a610-5bd1da2086fa", 00:09:16.749 "assigned_rate_limits": { 00:09:16.749 "rw_ios_per_sec": 0, 00:09:16.749 "rw_mbytes_per_sec": 0, 00:09:16.749 "r_mbytes_per_sec": 0, 00:09:16.749 "w_mbytes_per_sec": 0 00:09:16.749 }, 00:09:16.749 "claimed": false, 00:09:16.749 "zoned": false, 00:09:16.749 "supported_io_types": { 00:09:16.749 "read": true, 00:09:16.749 "write": true, 00:09:16.749 "unmap": true, 00:09:16.749 "flush": true, 00:09:16.749 "reset": true, 00:09:16.749 "nvme_admin": false, 00:09:16.749 "nvme_io": false, 00:09:16.749 "nvme_io_md": false, 00:09:16.749 "write_zeroes": true, 00:09:16.749 "zcopy": false, 00:09:16.749 "get_zone_info": false, 00:09:16.749 "zone_management": false, 00:09:16.749 
"zone_append": false, 00:09:16.749 "compare": false, 00:09:16.749 "compare_and_write": false, 00:09:16.749 "abort": false, 00:09:16.749 "seek_hole": false, 00:09:16.749 "seek_data": false, 00:09:16.749 "copy": false, 00:09:16.749 "nvme_iov_md": false 00:09:16.749 }, 00:09:16.749 "memory_domains": [ 00:09:16.749 { 00:09:16.749 "dma_device_id": "system", 00:09:16.749 "dma_device_type": 1 00:09:16.749 }, 00:09:16.749 { 00:09:16.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.749 "dma_device_type": 2 00:09:16.749 }, 00:09:16.749 { 00:09:16.749 "dma_device_id": "system", 00:09:16.749 "dma_device_type": 1 00:09:16.749 }, 00:09:16.749 { 00:09:16.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.749 "dma_device_type": 2 00:09:16.749 }, 00:09:16.749 { 00:09:16.749 "dma_device_id": "system", 00:09:16.749 "dma_device_type": 1 00:09:16.749 }, 00:09:16.749 { 00:09:16.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.749 "dma_device_type": 2 00:09:16.749 } 00:09:16.749 ], 00:09:16.749 "driver_specific": { 00:09:16.749 "raid": { 00:09:16.749 "uuid": "13de0357-5d98-42e7-a610-5bd1da2086fa", 00:09:16.749 "strip_size_kb": 64, 00:09:16.749 "state": "online", 00:09:16.749 "raid_level": "raid0", 00:09:16.749 "superblock": false, 00:09:16.749 "num_base_bdevs": 3, 00:09:16.749 "num_base_bdevs_discovered": 3, 00:09:16.749 "num_base_bdevs_operational": 3, 00:09:16.749 "base_bdevs_list": [ 00:09:16.749 { 00:09:16.749 "name": "BaseBdev1", 00:09:16.749 "uuid": "b9de1bc7-effd-4f9b-a14e-d15e02a95c46", 00:09:16.749 "is_configured": true, 00:09:16.749 "data_offset": 0, 00:09:16.749 "data_size": 65536 00:09:16.749 }, 00:09:16.749 { 00:09:16.749 "name": "BaseBdev2", 00:09:16.749 "uuid": "2392877a-8758-40d4-b571-5e2e821e1f82", 00:09:16.749 "is_configured": true, 00:09:16.749 "data_offset": 0, 00:09:16.749 "data_size": 65536 00:09:16.749 }, 00:09:16.749 { 00:09:16.749 "name": "BaseBdev3", 00:09:16.749 "uuid": "3dcc10fa-ec05-4bf2-a609-189dc1bdb603", 00:09:16.749 "is_configured": true, 
00:09:16.749 "data_offset": 0, 00:09:16.749 "data_size": 65536 00:09:16.749 } 00:09:16.749 ] 00:09:16.749 } 00:09:16.749 } 00:09:16.749 }' 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:16.749 BaseBdev2 00:09:16.749 BaseBdev3' 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.749 12:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.009 [2024-09-30 12:26:28.655651] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:17.009 [2024-09-30 12:26:28.655689] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.009 [2024-09-30 12:26:28.655763] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.009 "name": "Existed_Raid", 00:09:17.009 "uuid": "13de0357-5d98-42e7-a610-5bd1da2086fa", 00:09:17.009 "strip_size_kb": 64, 00:09:17.009 "state": "offline", 00:09:17.009 "raid_level": "raid0", 00:09:17.009 "superblock": false, 00:09:17.009 "num_base_bdevs": 3, 00:09:17.009 "num_base_bdevs_discovered": 2, 00:09:17.009 "num_base_bdevs_operational": 2, 00:09:17.009 "base_bdevs_list": [ 00:09:17.009 { 00:09:17.009 "name": null, 00:09:17.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.009 "is_configured": false, 00:09:17.009 "data_offset": 0, 00:09:17.009 "data_size": 65536 00:09:17.009 }, 00:09:17.009 { 00:09:17.009 "name": "BaseBdev2", 00:09:17.009 "uuid": "2392877a-8758-40d4-b571-5e2e821e1f82", 00:09:17.009 "is_configured": true, 00:09:17.009 "data_offset": 0, 00:09:17.009 "data_size": 65536 00:09:17.009 }, 00:09:17.009 { 00:09:17.009 "name": "BaseBdev3", 00:09:17.009 "uuid": "3dcc10fa-ec05-4bf2-a609-189dc1bdb603", 00:09:17.009 "is_configured": true, 00:09:17.009 "data_offset": 0, 00:09:17.009 "data_size": 65536 00:09:17.009 } 00:09:17.009 ] 00:09:17.009 }' 00:09:17.009 12:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.009 12:26:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.268 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:17.268 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.527 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.528 [2024-09-30 12:26:29.216627] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.528 12:26:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.528 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.528 [2024-09-30 12:26:29.361261] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:17.528 [2024-09-30 12:26:29.361323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.788 BaseBdev2 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.788 [ 00:09:17.788 { 00:09:17.788 "name": "BaseBdev2", 00:09:17.788 "aliases": [ 00:09:17.788 "c2aa368b-1bb1-4c0f-9745-fc34cda52691" 00:09:17.788 ], 00:09:17.788 "product_name": "Malloc disk", 00:09:17.788 "block_size": 512, 00:09:17.788 "num_blocks": 65536, 00:09:17.788 "uuid": "c2aa368b-1bb1-4c0f-9745-fc34cda52691", 00:09:17.788 "assigned_rate_limits": { 00:09:17.788 "rw_ios_per_sec": 0, 00:09:17.788 "rw_mbytes_per_sec": 0, 00:09:17.788 "r_mbytes_per_sec": 0, 00:09:17.788 "w_mbytes_per_sec": 0 00:09:17.788 }, 00:09:17.788 "claimed": false, 00:09:17.788 "zoned": false, 00:09:17.788 "supported_io_types": { 00:09:17.788 "read": true, 00:09:17.788 "write": true, 00:09:17.788 "unmap": true, 00:09:17.788 "flush": true, 00:09:17.788 "reset": true, 00:09:17.788 "nvme_admin": false, 00:09:17.788 "nvme_io": false, 00:09:17.788 "nvme_io_md": false, 00:09:17.788 "write_zeroes": true, 00:09:17.788 "zcopy": true, 00:09:17.788 "get_zone_info": false, 00:09:17.788 "zone_management": false, 00:09:17.788 "zone_append": false, 00:09:17.788 "compare": false, 00:09:17.788 "compare_and_write": false, 00:09:17.788 "abort": true, 00:09:17.788 "seek_hole": false, 00:09:17.788 "seek_data": false, 00:09:17.788 "copy": true, 00:09:17.788 "nvme_iov_md": false 00:09:17.788 }, 00:09:17.788 "memory_domains": [ 00:09:17.788 { 00:09:17.788 "dma_device_id": "system", 00:09:17.788 "dma_device_type": 1 00:09:17.788 }, 
00:09:17.788 {
00:09:17.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:17.788 "dma_device_type": 2
00:09:17.788 }
00:09:17.788 ],
00:09:17.788 "driver_specific": {}
00:09:17.788 }
00:09:17.788 ]
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.788 BaseBdev3
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.788 [
00:09:17.788 {
00:09:17.788 "name": "BaseBdev3",
00:09:17.788 "aliases": [
00:09:17.788 "0c9cff6c-2a6b-4f84-9254-7a135d254761"
00:09:17.788 ],
00:09:17.788 "product_name": "Malloc disk",
00:09:17.788 "block_size": 512,
00:09:17.788 "num_blocks": 65536,
00:09:17.788 "uuid": "0c9cff6c-2a6b-4f84-9254-7a135d254761",
00:09:17.788 "assigned_rate_limits": {
00:09:17.788 "rw_ios_per_sec": 0,
00:09:17.788 "rw_mbytes_per_sec": 0,
00:09:17.788 "r_mbytes_per_sec": 0,
00:09:17.788 "w_mbytes_per_sec": 0
00:09:17.788 },
00:09:17.788 "claimed": false,
00:09:17.788 "zoned": false,
00:09:17.788 "supported_io_types": {
00:09:17.788 "read": true,
00:09:17.788 "write": true,
00:09:17.788 "unmap": true,
00:09:17.788 "flush": true,
00:09:17.788 "reset": true,
00:09:17.788 "nvme_admin": false,
00:09:17.788 "nvme_io": false,
00:09:17.788 "nvme_io_md": false,
00:09:17.788 "write_zeroes": true,
00:09:17.788 "zcopy": true,
00:09:17.788 "get_zone_info": false,
00:09:17.788 "zone_management": false,
00:09:17.788 "zone_append": false,
00:09:17.788 "compare": false,
00:09:17.788 "compare_and_write": false,
00:09:17.788 "abort": true,
00:09:17.788 "seek_hole": false,
00:09:17.788 "seek_data": false,
00:09:17.788 "copy": true,
00:09:17.788 "nvme_iov_md": false
00:09:17.788 },
00:09:17.788 "memory_domains": [
00:09:17.788 {
00:09:17.788 "dma_device_id": "system",
00:09:17.788 "dma_device_type": 1
00:09:17.788 },
00:09:17.788 {
00:09:17.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:17.788 "dma_device_type": 2
00:09:17.788 }
00:09:17.788 ],
00:09:17.788 "driver_specific": {}
00:09:17.788 }
00:09:17.788 ]
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:17.788 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:17.789 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:17.789 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.789 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.789 [2024-09-30 12:26:29.674967] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:17.789 [2024-09-30 12:26:29.675012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:17.789 [2024-09-30 12:26:29.675035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:17.789 [2024-09-30 12:26:29.676826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:17.789 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.789 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:17.789 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:17.789 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:17.789 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:17.789 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:17.789 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:17.789 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:17.789 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:17.789 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:17.789 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:18.049 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:18.049 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:18.049 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.049 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.049 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.049 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:18.049 "name": "Existed_Raid",
00:09:18.049 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:18.049 "strip_size_kb": 64,
00:09:18.049 "state": "configuring",
00:09:18.049 "raid_level": "raid0",
00:09:18.049 "superblock": false,
00:09:18.049 "num_base_bdevs": 3,
00:09:18.049 "num_base_bdevs_discovered": 2,
00:09:18.049 "num_base_bdevs_operational": 3,
00:09:18.049 "base_bdevs_list": [
00:09:18.049 {
00:09:18.049 "name": "BaseBdev1",
00:09:18.049 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:18.049 "is_configured": false,
00:09:18.049 "data_offset": 0,
00:09:18.049 "data_size": 0
00:09:18.049 },
00:09:18.049 {
00:09:18.049 "name": "BaseBdev2",
00:09:18.049 "uuid": "c2aa368b-1bb1-4c0f-9745-fc34cda52691",
00:09:18.049 "is_configured": true,
00:09:18.049 "data_offset": 0,
00:09:18.049 "data_size": 65536
00:09:18.049 },
00:09:18.049 {
00:09:18.049 "name": "BaseBdev3",
00:09:18.049 "uuid": "0c9cff6c-2a6b-4f84-9254-7a135d254761",
00:09:18.049 "is_configured": true,
00:09:18.049 "data_offset": 0,
00:09:18.049 "data_size": 65536
00:09:18.049 }
00:09:18.049 ]
00:09:18.049 }'
00:09:18.049 12:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:18.049 12:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.309 [2024-09-30 12:26:30.042299] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:18.309 "name": "Existed_Raid",
00:09:18.309 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:18.309 "strip_size_kb": 64,
00:09:18.309 "state": "configuring",
00:09:18.309 "raid_level": "raid0",
00:09:18.309 "superblock": false,
00:09:18.309 "num_base_bdevs": 3,
00:09:18.309 "num_base_bdevs_discovered": 1,
00:09:18.309 "num_base_bdevs_operational": 3,
00:09:18.309 "base_bdevs_list": [
00:09:18.309 {
00:09:18.309 "name": "BaseBdev1",
00:09:18.309 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:18.309 "is_configured": false,
00:09:18.309 "data_offset": 0,
00:09:18.309 "data_size": 0
00:09:18.309 },
00:09:18.309 {
00:09:18.309 "name": null,
00:09:18.309 "uuid": "c2aa368b-1bb1-4c0f-9745-fc34cda52691",
00:09:18.309 "is_configured": false,
00:09:18.309 "data_offset": 0,
00:09:18.309 "data_size": 65536
00:09:18.309 },
00:09:18.309 {
00:09:18.309 "name": "BaseBdev3",
00:09:18.309 "uuid": "0c9cff6c-2a6b-4f84-9254-7a135d254761",
00:09:18.309 "is_configured": true,
00:09:18.309 "data_offset": 0,
00:09:18.309 "data_size": 65536
00:09:18.309 }
00:09:18.309 ]
00:09:18.309 }'
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:18.309 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.879 [2024-09-30 12:26:30.559044] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:18.879 BaseBdev1
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.879 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.879 [
00:09:18.879 {
00:09:18.879 "name": "BaseBdev1",
00:09:18.879 "aliases": [
00:09:18.879 "4e35996d-7bb0-4d35-a4ec-676c04e214fa"
00:09:18.879 ],
00:09:18.879 "product_name": "Malloc disk",
00:09:18.879 "block_size": 512,
00:09:18.879 "num_blocks": 65536,
00:09:18.879 "uuid": "4e35996d-7bb0-4d35-a4ec-676c04e214fa",
00:09:18.879 "assigned_rate_limits": {
00:09:18.879 "rw_ios_per_sec": 0,
00:09:18.879 "rw_mbytes_per_sec": 0,
00:09:18.879 "r_mbytes_per_sec": 0,
00:09:18.879 "w_mbytes_per_sec": 0
00:09:18.879 },
00:09:18.879 "claimed": true,
00:09:18.879 "claim_type": "exclusive_write",
00:09:18.879 "zoned": false,
00:09:18.879 "supported_io_types": {
00:09:18.879 "read": true,
00:09:18.879 "write": true,
00:09:18.879 "unmap": true,
00:09:18.879 "flush": true,
00:09:18.879 "reset": true,
00:09:18.879 "nvme_admin": false,
00:09:18.879 "nvme_io": false,
00:09:18.879 "nvme_io_md": false,
00:09:18.879 "write_zeroes": true,
00:09:18.879 "zcopy": true,
00:09:18.879 "get_zone_info": false,
00:09:18.879 "zone_management": false,
00:09:18.879 "zone_append": false,
00:09:18.879 "compare": false,
00:09:18.879 "compare_and_write": false,
00:09:18.879 "abort": true,
00:09:18.879 "seek_hole": false,
00:09:18.879 "seek_data": false,
00:09:18.879 "copy": true,
00:09:18.879 "nvme_iov_md": false
00:09:18.879 },
00:09:18.879 "memory_domains": [
00:09:18.879 {
00:09:18.879 "dma_device_id": "system",
00:09:18.879 "dma_device_type": 1
00:09:18.879 },
00:09:18.879 {
00:09:18.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:18.879 "dma_device_type": 2
00:09:18.879 }
00:09:18.879 ],
00:09:18.879 "driver_specific": {}
00:09:18.880 }
00:09:18.880 ]
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:18.880 "name": "Existed_Raid",
00:09:18.880 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:18.880 "strip_size_kb": 64,
00:09:18.880 "state": "configuring",
00:09:18.880 "raid_level": "raid0",
00:09:18.880 "superblock": false,
00:09:18.880 "num_base_bdevs": 3,
00:09:18.880 "num_base_bdevs_discovered": 2,
00:09:18.880 "num_base_bdevs_operational": 3,
00:09:18.880 "base_bdevs_list": [
00:09:18.880 {
00:09:18.880 "name": "BaseBdev1",
00:09:18.880 "uuid": "4e35996d-7bb0-4d35-a4ec-676c04e214fa",
00:09:18.880 "is_configured": true,
00:09:18.880 "data_offset": 0,
00:09:18.880 "data_size": 65536
00:09:18.880 },
00:09:18.880 {
00:09:18.880 "name": null,
00:09:18.880 "uuid": "c2aa368b-1bb1-4c0f-9745-fc34cda52691",
00:09:18.880 "is_configured": false,
00:09:18.880 "data_offset": 0,
00:09:18.880 "data_size": 65536
00:09:18.880 },
00:09:18.880 {
00:09:18.880 "name": "BaseBdev3",
00:09:18.880 "uuid": "0c9cff6c-2a6b-4f84-9254-7a135d254761",
00:09:18.880 "is_configured": true,
00:09:18.880 "data_offset": 0,
00:09:18.880 "data_size": 65536
00:09:18.880 }
00:09:18.880 ]
00:09:18.880 }'
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:18.880 12:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.140 12:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:19.140 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:19.140 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.140 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.140 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.400 [2024-09-30 12:26:31.050257] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:19.400 "name": "Existed_Raid",
00:09:19.400 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:19.400 "strip_size_kb": 64,
00:09:19.400 "state": "configuring",
00:09:19.400 "raid_level": "raid0",
00:09:19.400 "superblock": false,
00:09:19.400 "num_base_bdevs": 3,
00:09:19.400 "num_base_bdevs_discovered": 1,
00:09:19.400 "num_base_bdevs_operational": 3,
00:09:19.400 "base_bdevs_list": [
00:09:19.400 {
00:09:19.400 "name": "BaseBdev1",
00:09:19.400 "uuid": "4e35996d-7bb0-4d35-a4ec-676c04e214fa",
00:09:19.400 "is_configured": true,
00:09:19.400 "data_offset": 0,
00:09:19.400 "data_size": 65536
00:09:19.400 },
00:09:19.400 {
00:09:19.400 "name": null,
00:09:19.400 "uuid": "c2aa368b-1bb1-4c0f-9745-fc34cda52691",
00:09:19.400 "is_configured": false,
00:09:19.400 "data_offset": 0,
00:09:19.400 "data_size": 65536
00:09:19.400 },
00:09:19.400 {
00:09:19.400 "name": null,
00:09:19.400 "uuid": "0c9cff6c-2a6b-4f84-9254-7a135d254761",
00:09:19.400 "is_configured": false,
00:09:19.400 "data_offset": 0,
00:09:19.400 "data_size": 65536
00:09:19.400 }
00:09:19.400 ]
00:09:19.400 }'
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:19.400 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.660 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:19.660 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:19.660 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.660 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.660 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.660 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:09:19.660 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:09:19.660 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.660 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.660 [2024-09-30 12:26:31.537488] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:19.660 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.660 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:19.660 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:19.660 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:19.660 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:19.660 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:19.661 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:19.661 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:19.661 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:19.661 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:19.661 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:19.661 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:19.661 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:19.661 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.661 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.920 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.920 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:19.920 "name": "Existed_Raid",
00:09:19.920 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:19.920 "strip_size_kb": 64,
00:09:19.920 "state": "configuring",
00:09:19.920 "raid_level": "raid0",
00:09:19.920 "superblock": false,
00:09:19.920 "num_base_bdevs": 3,
00:09:19.920 "num_base_bdevs_discovered": 2,
00:09:19.920 "num_base_bdevs_operational": 3,
00:09:19.920 "base_bdevs_list": [
00:09:19.920 {
00:09:19.920 "name": "BaseBdev1",
00:09:19.920 "uuid": "4e35996d-7bb0-4d35-a4ec-676c04e214fa",
00:09:19.920 "is_configured": true,
00:09:19.920 "data_offset": 0,
00:09:19.920 "data_size": 65536
00:09:19.920 },
00:09:19.920 {
00:09:19.920 "name": null,
00:09:19.920 "uuid": "c2aa368b-1bb1-4c0f-9745-fc34cda52691",
00:09:19.920 "is_configured": false,
00:09:19.920 "data_offset": 0,
00:09:19.920 "data_size": 65536
00:09:19.920 },
00:09:19.920 {
00:09:19.920 "name": "BaseBdev3",
00:09:19.920 "uuid": "0c9cff6c-2a6b-4f84-9254-7a135d254761",
00:09:19.920 "is_configured": true,
00:09:19.920 "data_offset": 0,
00:09:19.920 "data_size": 65536
00:09:19.920 }
00:09:19.920 ]
00:09:19.920 }'
00:09:19.920 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:19.920 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.180 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:20.180 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.180 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.180 12:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:20.180 12:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.180 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:09:20.180 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:20.180 12:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.180 12:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.180 [2024-09-30 12:26:32.032682] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:20.441 12:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.441 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:20.441 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:20.441 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:20.441 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:20.441 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:20.441 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:20.441 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:20.441 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:20.441 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:20.441 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:20.441 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:20.441 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:20.441 12:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.441 12:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.441 12:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.441 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:20.441 "name": "Existed_Raid",
00:09:20.441 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:20.441 "strip_size_kb": 64,
00:09:20.441 "state": "configuring",
00:09:20.441 "raid_level": "raid0",
00:09:20.441 "superblock": false,
00:09:20.441 "num_base_bdevs": 3,
00:09:20.441 "num_base_bdevs_discovered": 1,
00:09:20.441 "num_base_bdevs_operational": 3,
00:09:20.441 "base_bdevs_list": [
00:09:20.441 {
00:09:20.441 "name": null,
00:09:20.441 "uuid": "4e35996d-7bb0-4d35-a4ec-676c04e214fa",
00:09:20.441 "is_configured": false,
00:09:20.441 "data_offset": 0,
00:09:20.441 "data_size": 65536
00:09:20.441 },
00:09:20.441 {
00:09:20.441 "name": null,
00:09:20.441 "uuid": "c2aa368b-1bb1-4c0f-9745-fc34cda52691",
00:09:20.441 "is_configured": false,
00:09:20.441 "data_offset": 0,
00:09:20.441 "data_size": 65536
00:09:20.441 },
00:09:20.441 {
00:09:20.441 "name": "BaseBdev3",
00:09:20.441 "uuid": "0c9cff6c-2a6b-4f84-9254-7a135d254761",
00:09:20.441 "is_configured": true,
00:09:20.441 "data_offset": 0,
00:09:20.441 "data_size": 65536
00:09:20.441 }
00:09:20.441 ]
00:09:20.441 }'
00:09:20.441 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:20.441 12:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.701 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:20.701 12:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.701 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:20.701 12:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.701 12:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.961 [2024-09-30 12:26:32.627143] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.961 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:20.961 "name": "Existed_Raid",
00:09:20.961 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:20.961 "strip_size_kb": 64,
00:09:20.961 "state": "configuring",
00:09:20.961 "raid_level": "raid0",
00:09:20.961 "superblock": false,
00:09:20.961 "num_base_bdevs": 3,
00:09:20.961 "num_base_bdevs_discovered": 2,
00:09:20.961 "num_base_bdevs_operational": 3,
00:09:20.961 "base_bdevs_list": [
00:09:20.961 {
00:09:20.961 "name": null,
00:09:20.961 "uuid": "4e35996d-7bb0-4d35-a4ec-676c04e214fa",
00:09:20.961 "is_configured": false,
00:09:20.961 "data_offset": 0,
00:09:20.962 "data_size": 65536
00:09:20.962 },
00:09:20.962 {
00:09:20.962 "name": "BaseBdev2",
00:09:20.962 "uuid": "c2aa368b-1bb1-4c0f-9745-fc34cda52691",
00:09:20.962 "is_configured": true,
00:09:20.962 "data_offset": 0,
00:09:20.962 "data_size": 65536
00:09:20.962 },
00:09:20.962 {
00:09:20.962 "name": "BaseBdev3",
00:09:20.962 "uuid": "0c9cff6c-2a6b-4f84-9254-7a135d254761",
00:09:20.962 "is_configured": true,
00:09:20.962 "data_offset": 0,
00:09:20.962 "data_size": 65536
00:09:20.962 }
00:09:20.962 ]
00:09:20.962 }'
00:09:20.962 12:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:20.962 12:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.221 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:21.221 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:21.221 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.221 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.221 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.221 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:09:21.221 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:21.221 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:09:21.221 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.221 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.481 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.481 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4e35996d-7bb0-4d35-a4ec-676c04e214fa
00:09:21.481 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.481 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.481 [2024-09-30 12:26:33.187226] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:09:21.481 [2024-09-30 12:26:33.187269] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:09:21.481 [2024-09-30 12:26:33.187280] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:09:21.481 [2024-09-30 12:26:33.187551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb,
0x60d000006220 00:09:21.481 [2024-09-30 12:26:33.187701] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:21.481 [2024-09-30 12:26:33.187710] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:21.481 [2024-09-30 12:26:33.188032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.481 NewBaseBdev 00:09:21.481 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.481 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:21.481 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:21.481 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:21.481 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:21.481 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:21.481 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:21.481 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:21.481 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.481 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.481 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.481 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:21.481 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.481 12:26:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:21.481 [ 00:09:21.481 { 00:09:21.481 "name": "NewBaseBdev", 00:09:21.481 "aliases": [ 00:09:21.481 "4e35996d-7bb0-4d35-a4ec-676c04e214fa" 00:09:21.481 ], 00:09:21.481 "product_name": "Malloc disk", 00:09:21.481 "block_size": 512, 00:09:21.481 "num_blocks": 65536, 00:09:21.481 "uuid": "4e35996d-7bb0-4d35-a4ec-676c04e214fa", 00:09:21.481 "assigned_rate_limits": { 00:09:21.481 "rw_ios_per_sec": 0, 00:09:21.481 "rw_mbytes_per_sec": 0, 00:09:21.481 "r_mbytes_per_sec": 0, 00:09:21.481 "w_mbytes_per_sec": 0 00:09:21.481 }, 00:09:21.481 "claimed": true, 00:09:21.481 "claim_type": "exclusive_write", 00:09:21.481 "zoned": false, 00:09:21.481 "supported_io_types": { 00:09:21.481 "read": true, 00:09:21.481 "write": true, 00:09:21.481 "unmap": true, 00:09:21.481 "flush": true, 00:09:21.481 "reset": true, 00:09:21.481 "nvme_admin": false, 00:09:21.481 "nvme_io": false, 00:09:21.481 "nvme_io_md": false, 00:09:21.481 "write_zeroes": true, 00:09:21.481 "zcopy": true, 00:09:21.481 "get_zone_info": false, 00:09:21.481 "zone_management": false, 00:09:21.481 "zone_append": false, 00:09:21.481 "compare": false, 00:09:21.481 "compare_and_write": false, 00:09:21.481 "abort": true, 00:09:21.481 "seek_hole": false, 00:09:21.481 "seek_data": false, 00:09:21.481 "copy": true, 00:09:21.481 "nvme_iov_md": false 00:09:21.481 }, 00:09:21.481 "memory_domains": [ 00:09:21.481 { 00:09:21.481 "dma_device_id": "system", 00:09:21.481 "dma_device_type": 1 00:09:21.481 }, 00:09:21.481 { 00:09:21.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.481 "dma_device_type": 2 00:09:21.481 } 00:09:21.481 ], 00:09:21.481 "driver_specific": {} 00:09:21.481 } 00:09:21.481 ] 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.482 "name": "Existed_Raid", 00:09:21.482 "uuid": "7876e802-203f-4f16-982a-b0769721c35a", 00:09:21.482 "strip_size_kb": 64, 00:09:21.482 "state": "online", 00:09:21.482 "raid_level": "raid0", 00:09:21.482 "superblock": false, 00:09:21.482 
"num_base_bdevs": 3, 00:09:21.482 "num_base_bdevs_discovered": 3, 00:09:21.482 "num_base_bdevs_operational": 3, 00:09:21.482 "base_bdevs_list": [ 00:09:21.482 { 00:09:21.482 "name": "NewBaseBdev", 00:09:21.482 "uuid": "4e35996d-7bb0-4d35-a4ec-676c04e214fa", 00:09:21.482 "is_configured": true, 00:09:21.482 "data_offset": 0, 00:09:21.482 "data_size": 65536 00:09:21.482 }, 00:09:21.482 { 00:09:21.482 "name": "BaseBdev2", 00:09:21.482 "uuid": "c2aa368b-1bb1-4c0f-9745-fc34cda52691", 00:09:21.482 "is_configured": true, 00:09:21.482 "data_offset": 0, 00:09:21.482 "data_size": 65536 00:09:21.482 }, 00:09:21.482 { 00:09:21.482 "name": "BaseBdev3", 00:09:21.482 "uuid": "0c9cff6c-2a6b-4f84-9254-7a135d254761", 00:09:21.482 "is_configured": true, 00:09:21.482 "data_offset": 0, 00:09:21.482 "data_size": 65536 00:09:21.482 } 00:09:21.482 ] 00:09:21.482 }' 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.482 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.742 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:21.742 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:21.742 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:21.742 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.742 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.742 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.742 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:21.742 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.742 12:26:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.742 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:22.002 [2024-09-30 12:26:33.638774] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.002 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.002 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:22.002 "name": "Existed_Raid", 00:09:22.002 "aliases": [ 00:09:22.002 "7876e802-203f-4f16-982a-b0769721c35a" 00:09:22.002 ], 00:09:22.002 "product_name": "Raid Volume", 00:09:22.002 "block_size": 512, 00:09:22.002 "num_blocks": 196608, 00:09:22.002 "uuid": "7876e802-203f-4f16-982a-b0769721c35a", 00:09:22.002 "assigned_rate_limits": { 00:09:22.002 "rw_ios_per_sec": 0, 00:09:22.002 "rw_mbytes_per_sec": 0, 00:09:22.002 "r_mbytes_per_sec": 0, 00:09:22.002 "w_mbytes_per_sec": 0 00:09:22.002 }, 00:09:22.002 "claimed": false, 00:09:22.002 "zoned": false, 00:09:22.002 "supported_io_types": { 00:09:22.002 "read": true, 00:09:22.002 "write": true, 00:09:22.002 "unmap": true, 00:09:22.002 "flush": true, 00:09:22.002 "reset": true, 00:09:22.002 "nvme_admin": false, 00:09:22.002 "nvme_io": false, 00:09:22.002 "nvme_io_md": false, 00:09:22.002 "write_zeroes": true, 00:09:22.002 "zcopy": false, 00:09:22.002 "get_zone_info": false, 00:09:22.002 "zone_management": false, 00:09:22.002 "zone_append": false, 00:09:22.002 "compare": false, 00:09:22.002 "compare_and_write": false, 00:09:22.002 "abort": false, 00:09:22.002 "seek_hole": false, 00:09:22.002 "seek_data": false, 00:09:22.002 "copy": false, 00:09:22.002 "nvme_iov_md": false 00:09:22.002 }, 00:09:22.002 "memory_domains": [ 00:09:22.002 { 00:09:22.002 "dma_device_id": "system", 00:09:22.002 "dma_device_type": 1 00:09:22.002 }, 00:09:22.002 { 00:09:22.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.002 
"dma_device_type": 2 00:09:22.002 }, 00:09:22.002 { 00:09:22.002 "dma_device_id": "system", 00:09:22.002 "dma_device_type": 1 00:09:22.002 }, 00:09:22.002 { 00:09:22.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.002 "dma_device_type": 2 00:09:22.002 }, 00:09:22.002 { 00:09:22.002 "dma_device_id": "system", 00:09:22.002 "dma_device_type": 1 00:09:22.002 }, 00:09:22.002 { 00:09:22.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.002 "dma_device_type": 2 00:09:22.002 } 00:09:22.002 ], 00:09:22.002 "driver_specific": { 00:09:22.002 "raid": { 00:09:22.002 "uuid": "7876e802-203f-4f16-982a-b0769721c35a", 00:09:22.002 "strip_size_kb": 64, 00:09:22.002 "state": "online", 00:09:22.003 "raid_level": "raid0", 00:09:22.003 "superblock": false, 00:09:22.003 "num_base_bdevs": 3, 00:09:22.003 "num_base_bdevs_discovered": 3, 00:09:22.003 "num_base_bdevs_operational": 3, 00:09:22.003 "base_bdevs_list": [ 00:09:22.003 { 00:09:22.003 "name": "NewBaseBdev", 00:09:22.003 "uuid": "4e35996d-7bb0-4d35-a4ec-676c04e214fa", 00:09:22.003 "is_configured": true, 00:09:22.003 "data_offset": 0, 00:09:22.003 "data_size": 65536 00:09:22.003 }, 00:09:22.003 { 00:09:22.003 "name": "BaseBdev2", 00:09:22.003 "uuid": "c2aa368b-1bb1-4c0f-9745-fc34cda52691", 00:09:22.003 "is_configured": true, 00:09:22.003 "data_offset": 0, 00:09:22.003 "data_size": 65536 00:09:22.003 }, 00:09:22.003 { 00:09:22.003 "name": "BaseBdev3", 00:09:22.003 "uuid": "0c9cff6c-2a6b-4f84-9254-7a135d254761", 00:09:22.003 "is_configured": true, 00:09:22.003 "data_offset": 0, 00:09:22.003 "data_size": 65536 00:09:22.003 } 00:09:22.003 ] 00:09:22.003 } 00:09:22.003 } 00:09:22.003 }' 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:22.003 BaseBdev2 00:09:22.003 BaseBdev3' 
00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.003 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.263 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.263 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.263 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.263 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.263 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.263 [2024-09-30 12:26:33.921962] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.263 [2024-09-30 12:26:33.922040] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:22.263 [2024-09-30 12:26:33.922137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.263 [2024-09-30 12:26:33.922210] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.263 [2024-09-30 
12:26:33.922266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:22.263 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.263 12:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63715 00:09:22.263 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 63715 ']' 00:09:22.263 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 63715 00:09:22.263 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:22.263 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.263 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63715 00:09:22.263 killing process with pid 63715 00:09:22.263 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:22.263 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:22.263 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63715' 00:09:22.263 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 63715 00:09:22.263 [2024-09-30 12:26:33.956570] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:22.263 12:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 63715 00:09:22.523 [2024-09-30 12:26:34.245529] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:23.905 00:09:23.905 real 0m10.461s 00:09:23.905 user 0m16.545s 00:09:23.905 sys 0m1.717s 00:09:23.905 ************************************ 00:09:23.905 
END TEST raid_state_function_test 00:09:23.905 ************************************ 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.905 12:26:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:23.905 12:26:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:23.905 12:26:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.905 12:26:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:23.905 ************************************ 00:09:23.905 START TEST raid_state_function_test_sb 00:09:23.905 ************************************ 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:23.905 12:26:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64336 00:09:23.905 Process raid pid: 64336 00:09:23.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64336' 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64336 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 64336 ']' 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.905 12:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.905 [2024-09-30 12:26:35.661590] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:23.905 [2024-09-30 12:26:35.661810] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.165 [2024-09-30 12:26:35.822028] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.165 [2024-09-30 12:26:36.015017] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.424 [2024-09-30 12:26:36.216814] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.424 [2024-09-30 12:26:36.216984] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.684 [2024-09-30 12:26:36.481902] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:24.684 [2024-09-30 12:26:36.482051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:24.684 [2024-09-30 12:26:36.482099] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:24.684 [2024-09-30 12:26:36.482129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:24.684 [2024-09-30 12:26:36.482153] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:24.684 [2024-09-30 12:26:36.482197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.684 "name": "Existed_Raid", 00:09:24.684 "uuid": "725849c2-315a-477d-b974-96287524a456", 00:09:24.684 "strip_size_kb": 64, 00:09:24.684 "state": "configuring", 00:09:24.684 "raid_level": "raid0", 00:09:24.684 "superblock": true, 00:09:24.684 "num_base_bdevs": 3, 00:09:24.684 "num_base_bdevs_discovered": 0, 00:09:24.684 "num_base_bdevs_operational": 3, 00:09:24.684 "base_bdevs_list": [ 00:09:24.684 { 00:09:24.684 "name": "BaseBdev1", 00:09:24.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.684 "is_configured": false, 00:09:24.684 "data_offset": 0, 00:09:24.684 "data_size": 0 00:09:24.684 }, 00:09:24.684 { 00:09:24.684 "name": "BaseBdev2", 00:09:24.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.684 "is_configured": false, 00:09:24.684 "data_offset": 0, 00:09:24.684 "data_size": 0 00:09:24.684 }, 00:09:24.684 { 00:09:24.684 "name": "BaseBdev3", 00:09:24.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.684 "is_configured": false, 00:09:24.684 "data_offset": 0, 00:09:24.684 "data_size": 0 00:09:24.684 } 00:09:24.684 ] 00:09:24.684 }' 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.684 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.254 12:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:25.254 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.254 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.254 [2024-09-30 12:26:36.925047] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:25.254 [2024-09-30 12:26:36.925186] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:25.254 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.254 12:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:25.254 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.254 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.255 [2024-09-30 12:26:36.937034] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:25.255 [2024-09-30 12:26:36.937143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:25.255 [2024-09-30 12:26:36.937177] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:25.255 [2024-09-30 12:26:36.937207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:25.255 [2024-09-30 12:26:36.937230] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:25.255 [2024-09-30 12:26:36.937258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:25.255 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.255 12:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:25.255 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.255 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.255 [2024-09-30 12:26:36.995155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.255 BaseBdev1 
00:09:25.255 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.255 12:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:25.255 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:25.255 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:25.255 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:25.255 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:25.255 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:25.255 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:25.255 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.255 12:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.255 [ 00:09:25.255 { 00:09:25.255 "name": "BaseBdev1", 00:09:25.255 "aliases": [ 00:09:25.255 "cbeb888a-3687-4e41-a280-a175c6b734b7" 00:09:25.255 ], 00:09:25.255 "product_name": "Malloc disk", 00:09:25.255 "block_size": 512, 00:09:25.255 "num_blocks": 65536, 00:09:25.255 "uuid": "cbeb888a-3687-4e41-a280-a175c6b734b7", 00:09:25.255 "assigned_rate_limits": { 00:09:25.255 
"rw_ios_per_sec": 0, 00:09:25.255 "rw_mbytes_per_sec": 0, 00:09:25.255 "r_mbytes_per_sec": 0, 00:09:25.255 "w_mbytes_per_sec": 0 00:09:25.255 }, 00:09:25.255 "claimed": true, 00:09:25.255 "claim_type": "exclusive_write", 00:09:25.255 "zoned": false, 00:09:25.255 "supported_io_types": { 00:09:25.255 "read": true, 00:09:25.255 "write": true, 00:09:25.255 "unmap": true, 00:09:25.255 "flush": true, 00:09:25.255 "reset": true, 00:09:25.255 "nvme_admin": false, 00:09:25.255 "nvme_io": false, 00:09:25.255 "nvme_io_md": false, 00:09:25.255 "write_zeroes": true, 00:09:25.255 "zcopy": true, 00:09:25.255 "get_zone_info": false, 00:09:25.255 "zone_management": false, 00:09:25.255 "zone_append": false, 00:09:25.255 "compare": false, 00:09:25.255 "compare_and_write": false, 00:09:25.255 "abort": true, 00:09:25.255 "seek_hole": false, 00:09:25.255 "seek_data": false, 00:09:25.255 "copy": true, 00:09:25.255 "nvme_iov_md": false 00:09:25.255 }, 00:09:25.255 "memory_domains": [ 00:09:25.255 { 00:09:25.255 "dma_device_id": "system", 00:09:25.255 "dma_device_type": 1 00:09:25.255 }, 00:09:25.255 { 00:09:25.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.255 "dma_device_type": 2 00:09:25.255 } 00:09:25.255 ], 00:09:25.255 "driver_specific": {} 00:09:25.255 } 00:09:25.255 ] 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.255 "name": "Existed_Raid", 00:09:25.255 "uuid": "2bccc086-f070-4267-8757-235b3a51f455", 00:09:25.255 "strip_size_kb": 64, 00:09:25.255 "state": "configuring", 00:09:25.255 "raid_level": "raid0", 00:09:25.255 "superblock": true, 00:09:25.255 "num_base_bdevs": 3, 00:09:25.255 "num_base_bdevs_discovered": 1, 00:09:25.255 "num_base_bdevs_operational": 3, 00:09:25.255 "base_bdevs_list": [ 00:09:25.255 { 00:09:25.255 "name": "BaseBdev1", 00:09:25.255 "uuid": "cbeb888a-3687-4e41-a280-a175c6b734b7", 00:09:25.255 "is_configured": true, 00:09:25.255 "data_offset": 2048, 00:09:25.255 "data_size": 63488 
00:09:25.255 }, 00:09:25.255 { 00:09:25.255 "name": "BaseBdev2", 00:09:25.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.255 "is_configured": false, 00:09:25.255 "data_offset": 0, 00:09:25.255 "data_size": 0 00:09:25.255 }, 00:09:25.255 { 00:09:25.255 "name": "BaseBdev3", 00:09:25.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.255 "is_configured": false, 00:09:25.255 "data_offset": 0, 00:09:25.255 "data_size": 0 00:09:25.255 } 00:09:25.255 ] 00:09:25.255 }' 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.255 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.824 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.825 [2024-09-30 12:26:37.462439] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:25.825 [2024-09-30 12:26:37.462608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.825 [2024-09-30 12:26:37.470457] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.825 [2024-09-30 
12:26:37.472396] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:25.825 [2024-09-30 12:26:37.472451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:25.825 [2024-09-30 12:26:37.472477] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:25.825 [2024-09-30 12:26:37.472489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.825 "name": "Existed_Raid", 00:09:25.825 "uuid": "f2c6b660-0f56-4623-a642-32a74a87d19b", 00:09:25.825 "strip_size_kb": 64, 00:09:25.825 "state": "configuring", 00:09:25.825 "raid_level": "raid0", 00:09:25.825 "superblock": true, 00:09:25.825 "num_base_bdevs": 3, 00:09:25.825 "num_base_bdevs_discovered": 1, 00:09:25.825 "num_base_bdevs_operational": 3, 00:09:25.825 "base_bdevs_list": [ 00:09:25.825 { 00:09:25.825 "name": "BaseBdev1", 00:09:25.825 "uuid": "cbeb888a-3687-4e41-a280-a175c6b734b7", 00:09:25.825 "is_configured": true, 00:09:25.825 "data_offset": 2048, 00:09:25.825 "data_size": 63488 00:09:25.825 }, 00:09:25.825 { 00:09:25.825 "name": "BaseBdev2", 00:09:25.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.825 "is_configured": false, 00:09:25.825 "data_offset": 0, 00:09:25.825 "data_size": 0 00:09:25.825 }, 00:09:25.825 { 00:09:25.825 "name": "BaseBdev3", 00:09:25.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.825 "is_configured": false, 00:09:25.825 "data_offset": 0, 00:09:25.825 "data_size": 0 00:09:25.825 } 00:09:25.825 ] 00:09:25.825 }' 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.825 12:26:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:26.085 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:26.085 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.085 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.085 [2024-09-30 12:26:37.931973] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.085 BaseBdev2 00:09:26.085 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.085 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.086 [ 00:09:26.086 { 00:09:26.086 "name": "BaseBdev2", 00:09:26.086 "aliases": [ 00:09:26.086 "48ca66f9-5951-4e36-8a2a-b4823e2037de" 00:09:26.086 ], 00:09:26.086 "product_name": "Malloc disk", 00:09:26.086 "block_size": 512, 00:09:26.086 "num_blocks": 65536, 00:09:26.086 "uuid": "48ca66f9-5951-4e36-8a2a-b4823e2037de", 00:09:26.086 "assigned_rate_limits": { 00:09:26.086 "rw_ios_per_sec": 0, 00:09:26.086 "rw_mbytes_per_sec": 0, 00:09:26.086 "r_mbytes_per_sec": 0, 00:09:26.086 "w_mbytes_per_sec": 0 00:09:26.086 }, 00:09:26.086 "claimed": true, 00:09:26.086 "claim_type": "exclusive_write", 00:09:26.086 "zoned": false, 00:09:26.086 "supported_io_types": { 00:09:26.086 "read": true, 00:09:26.086 "write": true, 00:09:26.086 "unmap": true, 00:09:26.086 "flush": true, 00:09:26.086 "reset": true, 00:09:26.086 "nvme_admin": false, 00:09:26.086 "nvme_io": false, 00:09:26.086 "nvme_io_md": false, 00:09:26.086 "write_zeroes": true, 00:09:26.086 "zcopy": true, 00:09:26.086 "get_zone_info": false, 00:09:26.086 "zone_management": false, 00:09:26.086 "zone_append": false, 00:09:26.086 "compare": false, 00:09:26.086 "compare_and_write": false, 00:09:26.086 "abort": true, 00:09:26.086 "seek_hole": false, 00:09:26.086 "seek_data": false, 00:09:26.086 "copy": true, 00:09:26.086 "nvme_iov_md": false 00:09:26.086 }, 00:09:26.086 "memory_domains": [ 00:09:26.086 { 00:09:26.086 "dma_device_id": "system", 00:09:26.086 "dma_device_type": 1 00:09:26.086 }, 00:09:26.086 { 00:09:26.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.086 "dma_device_type": 2 00:09:26.086 } 00:09:26.086 ], 00:09:26.086 "driver_specific": {} 00:09:26.086 } 00:09:26.086 ] 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.086 12:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.346 12:26:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.346 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.346 "name": "Existed_Raid", 00:09:26.346 "uuid": "f2c6b660-0f56-4623-a642-32a74a87d19b", 00:09:26.346 "strip_size_kb": 64, 00:09:26.346 "state": "configuring", 00:09:26.346 "raid_level": "raid0", 00:09:26.346 "superblock": true, 00:09:26.346 "num_base_bdevs": 3, 00:09:26.346 "num_base_bdevs_discovered": 2, 00:09:26.346 "num_base_bdevs_operational": 3, 00:09:26.346 "base_bdevs_list": [ 00:09:26.346 { 00:09:26.346 "name": "BaseBdev1", 00:09:26.346 "uuid": "cbeb888a-3687-4e41-a280-a175c6b734b7", 00:09:26.346 "is_configured": true, 00:09:26.346 "data_offset": 2048, 00:09:26.346 "data_size": 63488 00:09:26.346 }, 00:09:26.346 { 00:09:26.346 "name": "BaseBdev2", 00:09:26.346 "uuid": "48ca66f9-5951-4e36-8a2a-b4823e2037de", 00:09:26.346 "is_configured": true, 00:09:26.346 "data_offset": 2048, 00:09:26.346 "data_size": 63488 00:09:26.346 }, 00:09:26.346 { 00:09:26.346 "name": "BaseBdev3", 00:09:26.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.346 "is_configured": false, 00:09:26.346 "data_offset": 0, 00:09:26.346 "data_size": 0 00:09:26.346 } 00:09:26.346 ] 00:09:26.346 }' 00:09:26.346 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.346 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.606 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:26.606 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.606 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.606 [2024-09-30 12:26:38.484229] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:26.606 [2024-09-30 12:26:38.484518] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:26.606 [2024-09-30 12:26:38.484542] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:26.606 [2024-09-30 12:26:38.484833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:26.606 [2024-09-30 12:26:38.484987] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:26.606 [2024-09-30 12:26:38.484998] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:26.606 BaseBdev3 00:09:26.606 [2024-09-30 12:26:38.485151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.606 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.606 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:26.606 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:26.606 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:26.606 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:26.606 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:26.606 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:26.606 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:26.606 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.606 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.606 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:26.606 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:26.606 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.606 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.866 [ 00:09:26.866 { 00:09:26.866 "name": "BaseBdev3", 00:09:26.866 "aliases": [ 00:09:26.866 "38c520e3-08c5-4bb0-aca3-b00875ad6861" 00:09:26.866 ], 00:09:26.866 "product_name": "Malloc disk", 00:09:26.866 "block_size": 512, 00:09:26.866 "num_blocks": 65536, 00:09:26.866 "uuid": "38c520e3-08c5-4bb0-aca3-b00875ad6861", 00:09:26.866 "assigned_rate_limits": { 00:09:26.866 "rw_ios_per_sec": 0, 00:09:26.866 "rw_mbytes_per_sec": 0, 00:09:26.866 "r_mbytes_per_sec": 0, 00:09:26.866 "w_mbytes_per_sec": 0 00:09:26.866 }, 00:09:26.866 "claimed": true, 00:09:26.866 "claim_type": "exclusive_write", 00:09:26.866 "zoned": false, 00:09:26.866 "supported_io_types": { 00:09:26.866 "read": true, 00:09:26.866 "write": true, 00:09:26.866 "unmap": true, 00:09:26.866 "flush": true, 00:09:26.866 "reset": true, 00:09:26.866 "nvme_admin": false, 00:09:26.866 "nvme_io": false, 00:09:26.866 "nvme_io_md": false, 00:09:26.866 "write_zeroes": true, 00:09:26.866 "zcopy": true, 00:09:26.866 "get_zone_info": false, 00:09:26.866 "zone_management": false, 00:09:26.866 "zone_append": false, 00:09:26.866 "compare": false, 00:09:26.866 "compare_and_write": false, 00:09:26.866 "abort": true, 00:09:26.866 "seek_hole": false, 00:09:26.866 "seek_data": false, 00:09:26.866 "copy": true, 00:09:26.866 "nvme_iov_md": false 00:09:26.866 }, 00:09:26.866 "memory_domains": [ 00:09:26.866 { 00:09:26.866 "dma_device_id": "system", 00:09:26.866 "dma_device_type": 1 00:09:26.866 }, 00:09:26.866 { 00:09:26.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.866 "dma_device_type": 2 00:09:26.866 } 00:09:26.866 ], 00:09:26.866 "driver_specific": 
{} 00:09:26.866 } 00:09:26.866 ] 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.866 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.867 "name": "Existed_Raid", 00:09:26.867 "uuid": "f2c6b660-0f56-4623-a642-32a74a87d19b", 00:09:26.867 "strip_size_kb": 64, 00:09:26.867 "state": "online", 00:09:26.867 "raid_level": "raid0", 00:09:26.867 "superblock": true, 00:09:26.867 "num_base_bdevs": 3, 00:09:26.867 "num_base_bdevs_discovered": 3, 00:09:26.867 "num_base_bdevs_operational": 3, 00:09:26.867 "base_bdevs_list": [ 00:09:26.867 { 00:09:26.867 "name": "BaseBdev1", 00:09:26.867 "uuid": "cbeb888a-3687-4e41-a280-a175c6b734b7", 00:09:26.867 "is_configured": true, 00:09:26.867 "data_offset": 2048, 00:09:26.867 "data_size": 63488 00:09:26.867 }, 00:09:26.867 { 00:09:26.867 "name": "BaseBdev2", 00:09:26.867 "uuid": "48ca66f9-5951-4e36-8a2a-b4823e2037de", 00:09:26.867 "is_configured": true, 00:09:26.867 "data_offset": 2048, 00:09:26.867 "data_size": 63488 00:09:26.867 }, 00:09:26.867 { 00:09:26.867 "name": "BaseBdev3", 00:09:26.867 "uuid": "38c520e3-08c5-4bb0-aca3-b00875ad6861", 00:09:26.867 "is_configured": true, 00:09:26.867 "data_offset": 2048, 00:09:26.867 "data_size": 63488 00:09:26.867 } 00:09:26.867 ] 00:09:26.867 }' 00:09:26.867 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.867 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.127 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:27.127 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:27.127 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:09:27.127 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:27.127 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:27.127 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:27.127 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:27.127 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:27.127 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.127 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.127 [2024-09-30 12:26:38.947828] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.127 12:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.127 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:27.127 "name": "Existed_Raid", 00:09:27.127 "aliases": [ 00:09:27.127 "f2c6b660-0f56-4623-a642-32a74a87d19b" 00:09:27.127 ], 00:09:27.127 "product_name": "Raid Volume", 00:09:27.127 "block_size": 512, 00:09:27.127 "num_blocks": 190464, 00:09:27.127 "uuid": "f2c6b660-0f56-4623-a642-32a74a87d19b", 00:09:27.127 "assigned_rate_limits": { 00:09:27.127 "rw_ios_per_sec": 0, 00:09:27.127 "rw_mbytes_per_sec": 0, 00:09:27.127 "r_mbytes_per_sec": 0, 00:09:27.127 "w_mbytes_per_sec": 0 00:09:27.127 }, 00:09:27.127 "claimed": false, 00:09:27.127 "zoned": false, 00:09:27.127 "supported_io_types": { 00:09:27.127 "read": true, 00:09:27.127 "write": true, 00:09:27.127 "unmap": true, 00:09:27.127 "flush": true, 00:09:27.127 "reset": true, 00:09:27.127 "nvme_admin": false, 00:09:27.127 "nvme_io": false, 00:09:27.127 "nvme_io_md": false, 00:09:27.127 
"write_zeroes": true, 00:09:27.127 "zcopy": false, 00:09:27.127 "get_zone_info": false, 00:09:27.127 "zone_management": false, 00:09:27.127 "zone_append": false, 00:09:27.127 "compare": false, 00:09:27.127 "compare_and_write": false, 00:09:27.127 "abort": false, 00:09:27.127 "seek_hole": false, 00:09:27.127 "seek_data": false, 00:09:27.127 "copy": false, 00:09:27.127 "nvme_iov_md": false 00:09:27.127 }, 00:09:27.127 "memory_domains": [ 00:09:27.127 { 00:09:27.127 "dma_device_id": "system", 00:09:27.127 "dma_device_type": 1 00:09:27.127 }, 00:09:27.127 { 00:09:27.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.127 "dma_device_type": 2 00:09:27.127 }, 00:09:27.127 { 00:09:27.127 "dma_device_id": "system", 00:09:27.127 "dma_device_type": 1 00:09:27.127 }, 00:09:27.127 { 00:09:27.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.127 "dma_device_type": 2 00:09:27.127 }, 00:09:27.127 { 00:09:27.127 "dma_device_id": "system", 00:09:27.127 "dma_device_type": 1 00:09:27.127 }, 00:09:27.127 { 00:09:27.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.127 "dma_device_type": 2 00:09:27.127 } 00:09:27.127 ], 00:09:27.127 "driver_specific": { 00:09:27.127 "raid": { 00:09:27.127 "uuid": "f2c6b660-0f56-4623-a642-32a74a87d19b", 00:09:27.127 "strip_size_kb": 64, 00:09:27.127 "state": "online", 00:09:27.127 "raid_level": "raid0", 00:09:27.127 "superblock": true, 00:09:27.127 "num_base_bdevs": 3, 00:09:27.127 "num_base_bdevs_discovered": 3, 00:09:27.127 "num_base_bdevs_operational": 3, 00:09:27.127 "base_bdevs_list": [ 00:09:27.127 { 00:09:27.127 "name": "BaseBdev1", 00:09:27.127 "uuid": "cbeb888a-3687-4e41-a280-a175c6b734b7", 00:09:27.127 "is_configured": true, 00:09:27.127 "data_offset": 2048, 00:09:27.127 "data_size": 63488 00:09:27.127 }, 00:09:27.127 { 00:09:27.127 "name": "BaseBdev2", 00:09:27.127 "uuid": "48ca66f9-5951-4e36-8a2a-b4823e2037de", 00:09:27.127 "is_configured": true, 00:09:27.127 "data_offset": 2048, 00:09:27.127 "data_size": 63488 00:09:27.127 }, 
00:09:27.127 { 00:09:27.127 "name": "BaseBdev3", 00:09:27.127 "uuid": "38c520e3-08c5-4bb0-aca3-b00875ad6861", 00:09:27.127 "is_configured": true, 00:09:27.127 "data_offset": 2048, 00:09:27.127 "data_size": 63488 00:09:27.127 } 00:09:27.127 ] 00:09:27.127 } 00:09:27.127 } 00:09:27.127 }' 00:09:27.127 12:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:27.387 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:27.387 BaseBdev2 00:09:27.387 BaseBdev3' 00:09:27.387 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.387 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:27.387 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.387 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:27.387 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.387 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.387 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.387 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.387 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.387 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.387 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.387 
12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:27.388 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.388 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.388 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.388 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.388 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.388 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.388 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.388 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:27.388 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.388 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.388 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.388 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.388 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.388 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.388 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:27.388 12:26:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.388 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.388 [2024-09-30 12:26:39.227086] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:27.388 [2024-09-30 12:26:39.227117] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.388 [2024-09-30 12:26:39.227174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.647 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.647 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.648 "name": "Existed_Raid", 00:09:27.648 "uuid": "f2c6b660-0f56-4623-a642-32a74a87d19b", 00:09:27.648 "strip_size_kb": 64, 00:09:27.648 "state": "offline", 00:09:27.648 "raid_level": "raid0", 00:09:27.648 "superblock": true, 00:09:27.648 "num_base_bdevs": 3, 00:09:27.648 "num_base_bdevs_discovered": 2, 00:09:27.648 "num_base_bdevs_operational": 2, 00:09:27.648 "base_bdevs_list": [ 00:09:27.648 { 00:09:27.648 "name": null, 00:09:27.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.648 "is_configured": false, 00:09:27.648 "data_offset": 0, 00:09:27.648 "data_size": 63488 00:09:27.648 }, 00:09:27.648 { 00:09:27.648 "name": "BaseBdev2", 00:09:27.648 "uuid": "48ca66f9-5951-4e36-8a2a-b4823e2037de", 00:09:27.648 "is_configured": true, 00:09:27.648 "data_offset": 2048, 00:09:27.648 "data_size": 63488 00:09:27.648 }, 00:09:27.648 { 00:09:27.648 "name": "BaseBdev3", 00:09:27.648 "uuid": "38c520e3-08c5-4bb0-aca3-b00875ad6861", 
00:09:27.648 "is_configured": true, 00:09:27.648 "data_offset": 2048, 00:09:27.648 "data_size": 63488 00:09:27.648 } 00:09:27.648 ] 00:09:27.648 }' 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.648 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.907 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:27.907 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:27.907 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:27.907 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.907 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.907 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.907 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.907 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:27.907 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:27.907 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:27.907 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.907 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.907 [2024-09-30 12:26:39.784619] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:28.167 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.167 12:26:39 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:28.167 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:28.167 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.167 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:28.167 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.167 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.167 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.167 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:28.167 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:28.167 12:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:28.167 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.167 12:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.167 [2024-09-30 12:26:39.934481] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:28.167 [2024-09-30 12:26:39.934540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:28.167 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.167 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:28.167 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:28.167 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:28.167 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:28.167 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.167 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.167 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.429 BaseBdev2 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:28.429 12:26:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.429 [ 00:09:28.429 { 00:09:28.429 "name": "BaseBdev2", 00:09:28.429 "aliases": [ 00:09:28.429 "ac7d84d0-0e41-440d-bba9-4f17f7ac3031" 00:09:28.429 ], 00:09:28.429 "product_name": "Malloc disk", 00:09:28.429 "block_size": 512, 00:09:28.429 "num_blocks": 65536, 00:09:28.429 "uuid": "ac7d84d0-0e41-440d-bba9-4f17f7ac3031", 00:09:28.429 "assigned_rate_limits": { 00:09:28.429 "rw_ios_per_sec": 0, 00:09:28.429 "rw_mbytes_per_sec": 0, 00:09:28.429 "r_mbytes_per_sec": 0, 00:09:28.429 "w_mbytes_per_sec": 0 00:09:28.429 }, 00:09:28.429 "claimed": false, 00:09:28.429 "zoned": false, 00:09:28.429 "supported_io_types": { 00:09:28.429 "read": true, 00:09:28.429 "write": true, 00:09:28.429 "unmap": true, 00:09:28.429 "flush": true, 00:09:28.429 "reset": true, 00:09:28.429 "nvme_admin": false, 00:09:28.429 "nvme_io": false, 00:09:28.429 "nvme_io_md": false, 00:09:28.429 "write_zeroes": true, 00:09:28.429 "zcopy": true, 00:09:28.429 "get_zone_info": false, 00:09:28.429 
"zone_management": false, 00:09:28.429 "zone_append": false, 00:09:28.429 "compare": false, 00:09:28.429 "compare_and_write": false, 00:09:28.429 "abort": true, 00:09:28.429 "seek_hole": false, 00:09:28.429 "seek_data": false, 00:09:28.429 "copy": true, 00:09:28.429 "nvme_iov_md": false 00:09:28.429 }, 00:09:28.429 "memory_domains": [ 00:09:28.429 { 00:09:28.429 "dma_device_id": "system", 00:09:28.429 "dma_device_type": 1 00:09:28.429 }, 00:09:28.429 { 00:09:28.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.429 "dma_device_type": 2 00:09:28.429 } 00:09:28.429 ], 00:09:28.429 "driver_specific": {} 00:09:28.429 } 00:09:28.429 ] 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.429 BaseBdev3 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.429 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.429 [ 00:09:28.429 { 00:09:28.429 "name": "BaseBdev3", 00:09:28.429 "aliases": [ 00:09:28.429 "7024f160-ff01-4a6c-a551-6771c06b412a" 00:09:28.429 ], 00:09:28.429 "product_name": "Malloc disk", 00:09:28.429 "block_size": 512, 00:09:28.429 "num_blocks": 65536, 00:09:28.429 "uuid": "7024f160-ff01-4a6c-a551-6771c06b412a", 00:09:28.429 "assigned_rate_limits": { 00:09:28.429 "rw_ios_per_sec": 0, 00:09:28.429 "rw_mbytes_per_sec": 0, 00:09:28.429 "r_mbytes_per_sec": 0, 00:09:28.429 "w_mbytes_per_sec": 0 00:09:28.429 }, 00:09:28.429 "claimed": false, 00:09:28.429 "zoned": false, 00:09:28.429 "supported_io_types": { 00:09:28.429 "read": true, 00:09:28.429 "write": true, 00:09:28.429 "unmap": true, 00:09:28.429 "flush": true, 00:09:28.429 "reset": true, 00:09:28.429 "nvme_admin": false, 00:09:28.429 "nvme_io": false, 00:09:28.429 "nvme_io_md": false, 00:09:28.429 "write_zeroes": true, 00:09:28.429 
"zcopy": true, 00:09:28.429 "get_zone_info": false, 00:09:28.429 "zone_management": false, 00:09:28.429 "zone_append": false, 00:09:28.429 "compare": false, 00:09:28.429 "compare_and_write": false, 00:09:28.429 "abort": true, 00:09:28.429 "seek_hole": false, 00:09:28.429 "seek_data": false, 00:09:28.430 "copy": true, 00:09:28.430 "nvme_iov_md": false 00:09:28.430 }, 00:09:28.430 "memory_domains": [ 00:09:28.430 { 00:09:28.430 "dma_device_id": "system", 00:09:28.430 "dma_device_type": 1 00:09:28.430 }, 00:09:28.430 { 00:09:28.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.430 "dma_device_type": 2 00:09:28.430 } 00:09:28.430 ], 00:09:28.430 "driver_specific": {} 00:09:28.430 } 00:09:28.430 ] 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.430 [2024-09-30 12:26:40.243016] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.430 [2024-09-30 12:26:40.243155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.430 [2024-09-30 12:26:40.243203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.430 [2024-09-30 12:26:40.244965] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.430 12:26:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.430 "name": "Existed_Raid", 00:09:28.430 "uuid": "5d4e7832-2a75-4796-9532-b0e6d5752b92", 00:09:28.430 "strip_size_kb": 64, 00:09:28.430 "state": "configuring", 00:09:28.430 "raid_level": "raid0", 00:09:28.430 "superblock": true, 00:09:28.430 "num_base_bdevs": 3, 00:09:28.430 "num_base_bdevs_discovered": 2, 00:09:28.430 "num_base_bdevs_operational": 3, 00:09:28.430 "base_bdevs_list": [ 00:09:28.430 { 00:09:28.430 "name": "BaseBdev1", 00:09:28.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.430 "is_configured": false, 00:09:28.430 "data_offset": 0, 00:09:28.430 "data_size": 0 00:09:28.430 }, 00:09:28.430 { 00:09:28.430 "name": "BaseBdev2", 00:09:28.430 "uuid": "ac7d84d0-0e41-440d-bba9-4f17f7ac3031", 00:09:28.430 "is_configured": true, 00:09:28.430 "data_offset": 2048, 00:09:28.430 "data_size": 63488 00:09:28.430 }, 00:09:28.430 { 00:09:28.430 "name": "BaseBdev3", 00:09:28.430 "uuid": "7024f160-ff01-4a6c-a551-6771c06b412a", 00:09:28.430 "is_configured": true, 00:09:28.430 "data_offset": 2048, 00:09:28.430 "data_size": 63488 00:09:28.430 } 00:09:28.430 ] 00:09:28.430 }' 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.430 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.000 [2024-09-30 12:26:40.710160] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.000 12:26:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.000 "name": "Existed_Raid", 00:09:29.000 "uuid": "5d4e7832-2a75-4796-9532-b0e6d5752b92", 00:09:29.000 "strip_size_kb": 64, 
00:09:29.000 "state": "configuring", 00:09:29.000 "raid_level": "raid0", 00:09:29.000 "superblock": true, 00:09:29.000 "num_base_bdevs": 3, 00:09:29.000 "num_base_bdevs_discovered": 1, 00:09:29.000 "num_base_bdevs_operational": 3, 00:09:29.000 "base_bdevs_list": [ 00:09:29.000 { 00:09:29.000 "name": "BaseBdev1", 00:09:29.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.000 "is_configured": false, 00:09:29.000 "data_offset": 0, 00:09:29.000 "data_size": 0 00:09:29.000 }, 00:09:29.000 { 00:09:29.000 "name": null, 00:09:29.000 "uuid": "ac7d84d0-0e41-440d-bba9-4f17f7ac3031", 00:09:29.000 "is_configured": false, 00:09:29.000 "data_offset": 0, 00:09:29.000 "data_size": 63488 00:09:29.000 }, 00:09:29.000 { 00:09:29.000 "name": "BaseBdev3", 00:09:29.000 "uuid": "7024f160-ff01-4a6c-a551-6771c06b412a", 00:09:29.000 "is_configured": true, 00:09:29.000 "data_offset": 2048, 00:09:29.000 "data_size": 63488 00:09:29.000 } 00:09:29.000 ] 00:09:29.000 }' 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.000 12:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.260 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.260 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:29.260 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.260 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.519 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.519 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:29.519 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:29.519 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.519 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.519 [2024-09-30 12:26:41.230540] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.519 BaseBdev1 00:09:29.519 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.519 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.520 
[ 00:09:29.520 { 00:09:29.520 "name": "BaseBdev1", 00:09:29.520 "aliases": [ 00:09:29.520 "b1ba2269-6f59-4dd4-8cfa-3033e3bd3f72" 00:09:29.520 ], 00:09:29.520 "product_name": "Malloc disk", 00:09:29.520 "block_size": 512, 00:09:29.520 "num_blocks": 65536, 00:09:29.520 "uuid": "b1ba2269-6f59-4dd4-8cfa-3033e3bd3f72", 00:09:29.520 "assigned_rate_limits": { 00:09:29.520 "rw_ios_per_sec": 0, 00:09:29.520 "rw_mbytes_per_sec": 0, 00:09:29.520 "r_mbytes_per_sec": 0, 00:09:29.520 "w_mbytes_per_sec": 0 00:09:29.520 }, 00:09:29.520 "claimed": true, 00:09:29.520 "claim_type": "exclusive_write", 00:09:29.520 "zoned": false, 00:09:29.520 "supported_io_types": { 00:09:29.520 "read": true, 00:09:29.520 "write": true, 00:09:29.520 "unmap": true, 00:09:29.520 "flush": true, 00:09:29.520 "reset": true, 00:09:29.520 "nvme_admin": false, 00:09:29.520 "nvme_io": false, 00:09:29.520 "nvme_io_md": false, 00:09:29.520 "write_zeroes": true, 00:09:29.520 "zcopy": true, 00:09:29.520 "get_zone_info": false, 00:09:29.520 "zone_management": false, 00:09:29.520 "zone_append": false, 00:09:29.520 "compare": false, 00:09:29.520 "compare_and_write": false, 00:09:29.520 "abort": true, 00:09:29.520 "seek_hole": false, 00:09:29.520 "seek_data": false, 00:09:29.520 "copy": true, 00:09:29.520 "nvme_iov_md": false 00:09:29.520 }, 00:09:29.520 "memory_domains": [ 00:09:29.520 { 00:09:29.520 "dma_device_id": "system", 00:09:29.520 "dma_device_type": 1 00:09:29.520 }, 00:09:29.520 { 00:09:29.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.520 "dma_device_type": 2 00:09:29.520 } 00:09:29.520 ], 00:09:29.520 "driver_specific": {} 00:09:29.520 } 00:09:29.520 ] 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.520 "name": "Existed_Raid", 00:09:29.520 "uuid": "5d4e7832-2a75-4796-9532-b0e6d5752b92", 00:09:29.520 "strip_size_kb": 64, 00:09:29.520 "state": "configuring", 00:09:29.520 "raid_level": "raid0", 00:09:29.520 "superblock": true, 
00:09:29.520 "num_base_bdevs": 3, 00:09:29.520 "num_base_bdevs_discovered": 2, 00:09:29.520 "num_base_bdevs_operational": 3, 00:09:29.520 "base_bdevs_list": [ 00:09:29.520 { 00:09:29.520 "name": "BaseBdev1", 00:09:29.520 "uuid": "b1ba2269-6f59-4dd4-8cfa-3033e3bd3f72", 00:09:29.520 "is_configured": true, 00:09:29.520 "data_offset": 2048, 00:09:29.520 "data_size": 63488 00:09:29.520 }, 00:09:29.520 { 00:09:29.520 "name": null, 00:09:29.520 "uuid": "ac7d84d0-0e41-440d-bba9-4f17f7ac3031", 00:09:29.520 "is_configured": false, 00:09:29.520 "data_offset": 0, 00:09:29.520 "data_size": 63488 00:09:29.520 }, 00:09:29.520 { 00:09:29.520 "name": "BaseBdev3", 00:09:29.520 "uuid": "7024f160-ff01-4a6c-a551-6771c06b412a", 00:09:29.520 "is_configured": true, 00:09:29.520 "data_offset": 2048, 00:09:29.520 "data_size": 63488 00:09:29.520 } 00:09:29.520 ] 00:09:29.520 }' 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.520 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.089 [2024-09-30 12:26:41.773664] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.089 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.089 "name": "Existed_Raid", 00:09:30.089 "uuid": "5d4e7832-2a75-4796-9532-b0e6d5752b92", 00:09:30.089 "strip_size_kb": 64, 00:09:30.089 "state": "configuring", 00:09:30.089 "raid_level": "raid0", 00:09:30.089 "superblock": true, 00:09:30.089 "num_base_bdevs": 3, 00:09:30.089 "num_base_bdevs_discovered": 1, 00:09:30.089 "num_base_bdevs_operational": 3, 00:09:30.089 "base_bdevs_list": [ 00:09:30.089 { 00:09:30.089 "name": "BaseBdev1", 00:09:30.089 "uuid": "b1ba2269-6f59-4dd4-8cfa-3033e3bd3f72", 00:09:30.089 "is_configured": true, 00:09:30.089 "data_offset": 2048, 00:09:30.089 "data_size": 63488 00:09:30.089 }, 00:09:30.089 { 00:09:30.089 "name": null, 00:09:30.089 "uuid": "ac7d84d0-0e41-440d-bba9-4f17f7ac3031", 00:09:30.089 "is_configured": false, 00:09:30.089 "data_offset": 0, 00:09:30.089 "data_size": 63488 00:09:30.089 }, 00:09:30.089 { 00:09:30.089 "name": null, 00:09:30.089 "uuid": "7024f160-ff01-4a6c-a551-6771c06b412a", 00:09:30.089 "is_configured": false, 00:09:30.089 "data_offset": 0, 00:09:30.089 "data_size": 63488 00:09:30.089 } 00:09:30.089 ] 00:09:30.090 }' 00:09:30.090 12:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.090 12:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.349 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.349 12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.349 12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.349 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:30.349 12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.608 [2024-09-30 12:26:42.256872] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.608 12:26:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.608 "name": "Existed_Raid", 00:09:30.608 "uuid": "5d4e7832-2a75-4796-9532-b0e6d5752b92", 00:09:30.608 "strip_size_kb": 64, 00:09:30.608 "state": "configuring", 00:09:30.608 "raid_level": "raid0", 00:09:30.608 "superblock": true, 00:09:30.608 "num_base_bdevs": 3, 00:09:30.608 "num_base_bdevs_discovered": 2, 00:09:30.608 "num_base_bdevs_operational": 3, 00:09:30.608 "base_bdevs_list": [ 00:09:30.608 { 00:09:30.608 "name": "BaseBdev1", 00:09:30.608 "uuid": "b1ba2269-6f59-4dd4-8cfa-3033e3bd3f72", 00:09:30.608 "is_configured": true, 00:09:30.608 "data_offset": 2048, 00:09:30.608 "data_size": 63488 00:09:30.608 }, 00:09:30.608 { 00:09:30.608 "name": null, 00:09:30.608 "uuid": "ac7d84d0-0e41-440d-bba9-4f17f7ac3031", 00:09:30.608 "is_configured": false, 00:09:30.608 "data_offset": 0, 00:09:30.608 "data_size": 63488 00:09:30.608 }, 00:09:30.608 { 00:09:30.608 "name": "BaseBdev3", 00:09:30.608 "uuid": "7024f160-ff01-4a6c-a551-6771c06b412a", 00:09:30.608 "is_configured": true, 00:09:30.608 "data_offset": 2048, 00:09:30.608 "data_size": 63488 00:09:30.608 } 00:09:30.608 ] 00:09:30.608 }' 00:09:30.608 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.608 
12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.867 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:30.867 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.867 12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.867 12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.126 [2024-09-30 12:26:42.779999] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.126 12:26:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.126 "name": "Existed_Raid", 00:09:31.126 "uuid": "5d4e7832-2a75-4796-9532-b0e6d5752b92", 00:09:31.126 "strip_size_kb": 64, 00:09:31.126 "state": "configuring", 00:09:31.126 "raid_level": "raid0", 00:09:31.126 "superblock": true, 00:09:31.126 "num_base_bdevs": 3, 00:09:31.126 "num_base_bdevs_discovered": 1, 00:09:31.126 "num_base_bdevs_operational": 3, 00:09:31.126 "base_bdevs_list": [ 00:09:31.126 { 00:09:31.126 "name": null, 00:09:31.126 "uuid": "b1ba2269-6f59-4dd4-8cfa-3033e3bd3f72", 00:09:31.126 "is_configured": false, 00:09:31.126 "data_offset": 0, 00:09:31.126 "data_size": 63488 00:09:31.126 }, 00:09:31.126 { 00:09:31.126 "name": null, 00:09:31.126 "uuid": "ac7d84d0-0e41-440d-bba9-4f17f7ac3031", 00:09:31.126 "is_configured": false, 
00:09:31.126 "data_offset": 0, 00:09:31.126 "data_size": 63488 00:09:31.126 }, 00:09:31.126 { 00:09:31.126 "name": "BaseBdev3", 00:09:31.126 "uuid": "7024f160-ff01-4a6c-a551-6771c06b412a", 00:09:31.126 "is_configured": true, 00:09:31.126 "data_offset": 2048, 00:09:31.126 "data_size": 63488 00:09:31.126 } 00:09:31.126 ] 00:09:31.126 }' 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.126 12:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.693 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.693 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:31.693 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.693 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.693 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.693 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:31.693 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:31.693 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.693 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.693 [2024-09-30 12:26:43.371638] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:31.693 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.693 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:09:31.693 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.694 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.694 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.694 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.694 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.694 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.694 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.694 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.694 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.694 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.694 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.694 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.694 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.694 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.694 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.694 "name": "Existed_Raid", 00:09:31.694 "uuid": "5d4e7832-2a75-4796-9532-b0e6d5752b92", 00:09:31.694 "strip_size_kb": 64, 00:09:31.694 "state": "configuring", 00:09:31.694 "raid_level": "raid0", 00:09:31.694 "superblock": true, 00:09:31.694 
"num_base_bdevs": 3, 00:09:31.694 "num_base_bdevs_discovered": 2, 00:09:31.694 "num_base_bdevs_operational": 3, 00:09:31.694 "base_bdevs_list": [ 00:09:31.694 { 00:09:31.694 "name": null, 00:09:31.694 "uuid": "b1ba2269-6f59-4dd4-8cfa-3033e3bd3f72", 00:09:31.694 "is_configured": false, 00:09:31.694 "data_offset": 0, 00:09:31.694 "data_size": 63488 00:09:31.694 }, 00:09:31.694 { 00:09:31.694 "name": "BaseBdev2", 00:09:31.694 "uuid": "ac7d84d0-0e41-440d-bba9-4f17f7ac3031", 00:09:31.694 "is_configured": true, 00:09:31.694 "data_offset": 2048, 00:09:31.694 "data_size": 63488 00:09:31.694 }, 00:09:31.694 { 00:09:31.694 "name": "BaseBdev3", 00:09:31.694 "uuid": "7024f160-ff01-4a6c-a551-6771c06b412a", 00:09:31.694 "is_configured": true, 00:09:31.694 "data_offset": 2048, 00:09:31.694 "data_size": 63488 00:09:31.694 } 00:09:31.694 ] 00:09:31.694 }' 00:09:31.694 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.694 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.956 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:31.956 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.956 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.956 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.956 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.956 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:31.956 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.956 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:31.956 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:31.956 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.956 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.956 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b1ba2269-6f59-4dd4-8cfa-3033e3bd3f72 00:09:31.956 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.956 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.217 [2024-09-30 12:26:43.870977] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:32.217 [2024-09-30 12:26:43.871221] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:32.217 [2024-09-30 12:26:43.871239] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:32.217 [2024-09-30 12:26:43.871534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:32.217 [2024-09-30 12:26:43.871686] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:32.217 [2024-09-30 12:26:43.871705] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:32.217 NewBaseBdev 00:09:32.217 [2024-09-30 12:26:43.871883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=NewBaseBdev 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.217 [ 00:09:32.217 { 00:09:32.217 "name": "NewBaseBdev", 00:09:32.217 "aliases": [ 00:09:32.217 "b1ba2269-6f59-4dd4-8cfa-3033e3bd3f72" 00:09:32.217 ], 00:09:32.217 "product_name": "Malloc disk", 00:09:32.217 "block_size": 512, 00:09:32.217 "num_blocks": 65536, 00:09:32.217 "uuid": "b1ba2269-6f59-4dd4-8cfa-3033e3bd3f72", 00:09:32.217 "assigned_rate_limits": { 00:09:32.217 "rw_ios_per_sec": 0, 00:09:32.217 "rw_mbytes_per_sec": 0, 00:09:32.217 "r_mbytes_per_sec": 0, 00:09:32.217 "w_mbytes_per_sec": 0 00:09:32.217 }, 00:09:32.217 "claimed": true, 00:09:32.217 "claim_type": "exclusive_write", 00:09:32.217 "zoned": false, 00:09:32.217 "supported_io_types": { 00:09:32.217 "read": true, 00:09:32.217 
"write": true, 00:09:32.217 "unmap": true, 00:09:32.217 "flush": true, 00:09:32.217 "reset": true, 00:09:32.217 "nvme_admin": false, 00:09:32.217 "nvme_io": false, 00:09:32.217 "nvme_io_md": false, 00:09:32.217 "write_zeroes": true, 00:09:32.217 "zcopy": true, 00:09:32.217 "get_zone_info": false, 00:09:32.217 "zone_management": false, 00:09:32.217 "zone_append": false, 00:09:32.217 "compare": false, 00:09:32.217 "compare_and_write": false, 00:09:32.217 "abort": true, 00:09:32.217 "seek_hole": false, 00:09:32.217 "seek_data": false, 00:09:32.217 "copy": true, 00:09:32.217 "nvme_iov_md": false 00:09:32.217 }, 00:09:32.217 "memory_domains": [ 00:09:32.217 { 00:09:32.217 "dma_device_id": "system", 00:09:32.217 "dma_device_type": 1 00:09:32.217 }, 00:09:32.217 { 00:09:32.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.217 "dma_device_type": 2 00:09:32.217 } 00:09:32.217 ], 00:09:32.217 "driver_specific": {} 00:09:32.217 } 00:09:32.217 ] 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.217 "name": "Existed_Raid", 00:09:32.217 "uuid": "5d4e7832-2a75-4796-9532-b0e6d5752b92", 00:09:32.217 "strip_size_kb": 64, 00:09:32.217 "state": "online", 00:09:32.217 "raid_level": "raid0", 00:09:32.217 "superblock": true, 00:09:32.217 "num_base_bdevs": 3, 00:09:32.217 "num_base_bdevs_discovered": 3, 00:09:32.217 "num_base_bdevs_operational": 3, 00:09:32.217 "base_bdevs_list": [ 00:09:32.217 { 00:09:32.217 "name": "NewBaseBdev", 00:09:32.217 "uuid": "b1ba2269-6f59-4dd4-8cfa-3033e3bd3f72", 00:09:32.217 "is_configured": true, 00:09:32.217 "data_offset": 2048, 00:09:32.217 "data_size": 63488 00:09:32.217 }, 00:09:32.217 { 00:09:32.217 "name": "BaseBdev2", 00:09:32.217 "uuid": "ac7d84d0-0e41-440d-bba9-4f17f7ac3031", 00:09:32.217 "is_configured": true, 00:09:32.217 "data_offset": 2048, 00:09:32.217 "data_size": 63488 00:09:32.217 }, 00:09:32.217 { 00:09:32.217 "name": "BaseBdev3", 00:09:32.217 "uuid": 
"7024f160-ff01-4a6c-a551-6771c06b412a", 00:09:32.217 "is_configured": true, 00:09:32.217 "data_offset": 2048, 00:09:32.217 "data_size": 63488 00:09:32.217 } 00:09:32.217 ] 00:09:32.217 }' 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.217 12:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.477 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:32.477 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:32.477 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:32.477 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:32.477 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:32.477 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:32.477 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:32.477 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:32.477 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.477 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.477 [2024-09-30 12:26:44.346568] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.477 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:32.737 "name": "Existed_Raid", 00:09:32.737 "aliases": [ 00:09:32.737 "5d4e7832-2a75-4796-9532-b0e6d5752b92" 
00:09:32.737 ], 00:09:32.737 "product_name": "Raid Volume", 00:09:32.737 "block_size": 512, 00:09:32.737 "num_blocks": 190464, 00:09:32.737 "uuid": "5d4e7832-2a75-4796-9532-b0e6d5752b92", 00:09:32.737 "assigned_rate_limits": { 00:09:32.737 "rw_ios_per_sec": 0, 00:09:32.737 "rw_mbytes_per_sec": 0, 00:09:32.737 "r_mbytes_per_sec": 0, 00:09:32.737 "w_mbytes_per_sec": 0 00:09:32.737 }, 00:09:32.737 "claimed": false, 00:09:32.737 "zoned": false, 00:09:32.737 "supported_io_types": { 00:09:32.737 "read": true, 00:09:32.737 "write": true, 00:09:32.737 "unmap": true, 00:09:32.737 "flush": true, 00:09:32.737 "reset": true, 00:09:32.737 "nvme_admin": false, 00:09:32.737 "nvme_io": false, 00:09:32.737 "nvme_io_md": false, 00:09:32.737 "write_zeroes": true, 00:09:32.737 "zcopy": false, 00:09:32.737 "get_zone_info": false, 00:09:32.737 "zone_management": false, 00:09:32.737 "zone_append": false, 00:09:32.737 "compare": false, 00:09:32.737 "compare_and_write": false, 00:09:32.737 "abort": false, 00:09:32.737 "seek_hole": false, 00:09:32.737 "seek_data": false, 00:09:32.737 "copy": false, 00:09:32.737 "nvme_iov_md": false 00:09:32.737 }, 00:09:32.737 "memory_domains": [ 00:09:32.737 { 00:09:32.737 "dma_device_id": "system", 00:09:32.737 "dma_device_type": 1 00:09:32.737 }, 00:09:32.737 { 00:09:32.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.737 "dma_device_type": 2 00:09:32.737 }, 00:09:32.737 { 00:09:32.737 "dma_device_id": "system", 00:09:32.737 "dma_device_type": 1 00:09:32.737 }, 00:09:32.737 { 00:09:32.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.737 "dma_device_type": 2 00:09:32.737 }, 00:09:32.737 { 00:09:32.737 "dma_device_id": "system", 00:09:32.737 "dma_device_type": 1 00:09:32.737 }, 00:09:32.737 { 00:09:32.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.737 "dma_device_type": 2 00:09:32.737 } 00:09:32.737 ], 00:09:32.737 "driver_specific": { 00:09:32.737 "raid": { 00:09:32.737 "uuid": "5d4e7832-2a75-4796-9532-b0e6d5752b92", 00:09:32.737 
"strip_size_kb": 64, 00:09:32.737 "state": "online", 00:09:32.737 "raid_level": "raid0", 00:09:32.737 "superblock": true, 00:09:32.737 "num_base_bdevs": 3, 00:09:32.737 "num_base_bdevs_discovered": 3, 00:09:32.737 "num_base_bdevs_operational": 3, 00:09:32.737 "base_bdevs_list": [ 00:09:32.737 { 00:09:32.737 "name": "NewBaseBdev", 00:09:32.737 "uuid": "b1ba2269-6f59-4dd4-8cfa-3033e3bd3f72", 00:09:32.737 "is_configured": true, 00:09:32.737 "data_offset": 2048, 00:09:32.737 "data_size": 63488 00:09:32.737 }, 00:09:32.737 { 00:09:32.737 "name": "BaseBdev2", 00:09:32.737 "uuid": "ac7d84d0-0e41-440d-bba9-4f17f7ac3031", 00:09:32.737 "is_configured": true, 00:09:32.737 "data_offset": 2048, 00:09:32.737 "data_size": 63488 00:09:32.737 }, 00:09:32.737 { 00:09:32.737 "name": "BaseBdev3", 00:09:32.737 "uuid": "7024f160-ff01-4a6c-a551-6771c06b412a", 00:09:32.737 "is_configured": true, 00:09:32.737 "data_offset": 2048, 00:09:32.737 "data_size": 63488 00:09:32.737 } 00:09:32.737 ] 00:09:32.737 } 00:09:32.737 } 00:09:32.737 }' 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:32.737 BaseBdev2 00:09:32.737 BaseBdev3' 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.737 12:26:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.737 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:32.738 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.738 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.738 [2024-09-30 12:26:44.593845] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:32.738 [2024-09-30 12:26:44.593879] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.738 [2024-09-30 12:26:44.593961] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.738 [2024-09-30 12:26:44.594020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.738 [2024-09-30 12:26:44.594036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:32.738 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.738 12:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64336 00:09:32.738 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 64336 ']' 00:09:32.738 12:26:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 64336 00:09:32.738 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:32.738 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.738 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64336 00:09:32.997 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:32.997 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:32.997 killing process with pid 64336 00:09:32.997 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64336' 00:09:32.997 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 64336 00:09:32.997 [2024-09-30 12:26:44.643104] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:32.997 12:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 64336 00:09:33.255 [2024-09-30 12:26:44.926192] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:34.635 12:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:34.635 00:09:34.635 real 0m10.591s 00:09:34.635 user 0m16.758s 00:09:34.635 sys 0m1.855s 00:09:34.635 12:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.635 12:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.635 ************************************ 00:09:34.635 END TEST raid_state_function_test_sb 00:09:34.635 ************************************ 00:09:34.635 12:26:46 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:34.635 12:26:46 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:34.635 12:26:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.635 12:26:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:34.635 ************************************ 00:09:34.635 START TEST raid_superblock_test 00:09:34.635 ************************************ 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:34.635 12:26:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64961 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64961 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 64961 ']' 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.635 12:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.635 [2024-09-30 12:26:46.313055] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:34.635 [2024-09-30 12:26:46.313197] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64961 ] 00:09:34.635 [2024-09-30 12:26:46.474109] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.894 [2024-09-30 12:26:46.677328] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.154 [2024-09-30 12:26:46.864052] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.154 [2024-09-30 12:26:46.864113] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:35.414 
12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.414 malloc1 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.414 [2024-09-30 12:26:47.172230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:35.414 [2024-09-30 12:26:47.172302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.414 [2024-09-30 12:26:47.172328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:35.414 [2024-09-30 12:26:47.172342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.414 [2024-09-30 12:26:47.174435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.414 [2024-09-30 12:26:47.174475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:35.414 pt1 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.414 malloc2 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.414 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.415 [2024-09-30 12:26:47.256916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:35.415 [2024-09-30 12:26:47.256973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.415 [2024-09-30 12:26:47.257013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:35.415 [2024-09-30 12:26:47.257024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.415 [2024-09-30 12:26:47.259044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.415 [2024-09-30 12:26:47.259086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:35.415 
pt2 00:09:35.415 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.415 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:35.415 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:35.415 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:35.415 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:35.415 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:35.415 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:35.415 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:35.415 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:35.415 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:35.415 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.415 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.415 malloc3 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.675 [2024-09-30 12:26:47.316744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:35.675 [2024-09-30 12:26:47.316873] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.675 [2024-09-30 12:26:47.316933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:35.675 [2024-09-30 12:26:47.316969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.675 [2024-09-30 12:26:47.319005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.675 [2024-09-30 12:26:47.319099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:35.675 pt3 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.675 [2024-09-30 12:26:47.328815] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:35.675 [2024-09-30 12:26:47.330691] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:35.675 [2024-09-30 12:26:47.330838] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:35.675 [2024-09-30 12:26:47.331048] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:35.675 [2024-09-30 12:26:47.331106] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:35.675 [2024-09-30 12:26:47.331375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:35.675 [2024-09-30 12:26:47.331598] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:35.675 [2024-09-30 12:26:47.331650] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:35.675 [2024-09-30 12:26:47.331871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.675 12:26:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.675 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.675 "name": "raid_bdev1", 00:09:35.675 "uuid": "3c78300a-f52a-418a-8325-efab715538a1", 00:09:35.675 "strip_size_kb": 64, 00:09:35.675 "state": "online", 00:09:35.675 "raid_level": "raid0", 00:09:35.675 "superblock": true, 00:09:35.676 "num_base_bdevs": 3, 00:09:35.676 "num_base_bdevs_discovered": 3, 00:09:35.676 "num_base_bdevs_operational": 3, 00:09:35.676 "base_bdevs_list": [ 00:09:35.676 { 00:09:35.676 "name": "pt1", 00:09:35.676 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:35.676 "is_configured": true, 00:09:35.676 "data_offset": 2048, 00:09:35.676 "data_size": 63488 00:09:35.676 }, 00:09:35.676 { 00:09:35.676 "name": "pt2", 00:09:35.676 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.676 "is_configured": true, 00:09:35.676 "data_offset": 2048, 00:09:35.676 "data_size": 63488 00:09:35.676 }, 00:09:35.676 { 00:09:35.676 "name": "pt3", 00:09:35.676 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:35.676 "is_configured": true, 00:09:35.676 "data_offset": 2048, 00:09:35.676 "data_size": 63488 00:09:35.676 } 00:09:35.676 ] 00:09:35.676 }' 00:09:35.676 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.676 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.935 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:35.935 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:35.935 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.935 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:35.935 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.935 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.935 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:35.935 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.935 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.935 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.935 [2024-09-30 12:26:47.780283] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.935 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.935 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.935 "name": "raid_bdev1", 00:09:35.935 "aliases": [ 00:09:35.935 "3c78300a-f52a-418a-8325-efab715538a1" 00:09:35.935 ], 00:09:35.935 "product_name": "Raid Volume", 00:09:35.935 "block_size": 512, 00:09:35.935 "num_blocks": 190464, 00:09:35.935 "uuid": "3c78300a-f52a-418a-8325-efab715538a1", 00:09:35.936 "assigned_rate_limits": { 00:09:35.936 "rw_ios_per_sec": 0, 00:09:35.936 "rw_mbytes_per_sec": 0, 00:09:35.936 "r_mbytes_per_sec": 0, 00:09:35.936 "w_mbytes_per_sec": 0 00:09:35.936 }, 00:09:35.936 "claimed": false, 00:09:35.936 "zoned": false, 00:09:35.936 "supported_io_types": { 00:09:35.936 "read": true, 00:09:35.936 "write": true, 00:09:35.936 "unmap": true, 00:09:35.936 "flush": true, 00:09:35.936 "reset": true, 00:09:35.936 "nvme_admin": false, 00:09:35.936 "nvme_io": false, 00:09:35.936 "nvme_io_md": false, 00:09:35.936 "write_zeroes": true, 00:09:35.936 "zcopy": false, 00:09:35.936 "get_zone_info": false, 00:09:35.936 "zone_management": false, 00:09:35.936 "zone_append": false, 00:09:35.936 "compare": 
false, 00:09:35.936 "compare_and_write": false, 00:09:35.936 "abort": false, 00:09:35.936 "seek_hole": false, 00:09:35.936 "seek_data": false, 00:09:35.936 "copy": false, 00:09:35.936 "nvme_iov_md": false 00:09:35.936 }, 00:09:35.936 "memory_domains": [ 00:09:35.936 { 00:09:35.936 "dma_device_id": "system", 00:09:35.936 "dma_device_type": 1 00:09:35.936 }, 00:09:35.936 { 00:09:35.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.936 "dma_device_type": 2 00:09:35.936 }, 00:09:35.936 { 00:09:35.936 "dma_device_id": "system", 00:09:35.936 "dma_device_type": 1 00:09:35.936 }, 00:09:35.936 { 00:09:35.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.936 "dma_device_type": 2 00:09:35.936 }, 00:09:35.936 { 00:09:35.936 "dma_device_id": "system", 00:09:35.936 "dma_device_type": 1 00:09:35.936 }, 00:09:35.936 { 00:09:35.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.936 "dma_device_type": 2 00:09:35.936 } 00:09:35.936 ], 00:09:35.936 "driver_specific": { 00:09:35.936 "raid": { 00:09:35.936 "uuid": "3c78300a-f52a-418a-8325-efab715538a1", 00:09:35.936 "strip_size_kb": 64, 00:09:35.936 "state": "online", 00:09:35.936 "raid_level": "raid0", 00:09:35.936 "superblock": true, 00:09:35.936 "num_base_bdevs": 3, 00:09:35.936 "num_base_bdevs_discovered": 3, 00:09:35.936 "num_base_bdevs_operational": 3, 00:09:35.936 "base_bdevs_list": [ 00:09:35.936 { 00:09:35.936 "name": "pt1", 00:09:35.936 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:35.936 "is_configured": true, 00:09:35.936 "data_offset": 2048, 00:09:35.936 "data_size": 63488 00:09:35.936 }, 00:09:35.936 { 00:09:35.936 "name": "pt2", 00:09:35.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.936 "is_configured": true, 00:09:35.936 "data_offset": 2048, 00:09:35.936 "data_size": 63488 00:09:35.936 }, 00:09:35.936 { 00:09:35.936 "name": "pt3", 00:09:35.936 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:35.936 "is_configured": true, 00:09:35.936 "data_offset": 2048, 00:09:35.936 "data_size": 
63488 00:09:35.936 } 00:09:35.936 ] 00:09:35.936 } 00:09:35.936 } 00:09:35.936 }' 00:09:35.936 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:36.196 pt2 00:09:36.196 pt3' 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.196 12:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.196 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.196 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.196 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.196 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.196 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:36.196 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:36.196 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.196 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.196 [2024-09-30 12:26:48.059810] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.196 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3c78300a-f52a-418a-8325-efab715538a1 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3c78300a-f52a-418a-8325-efab715538a1 ']' 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.456 [2024-09-30 12:26:48.107432] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:36.456 [2024-09-30 12:26:48.107512] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.456 [2024-09-30 12:26:48.107614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.456 [2024-09-30 12:26:48.107719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.456 [2024-09-30 12:26:48.107801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:36.456 12:26:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.456 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.457 [2024-09-30 12:26:48.251207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:36.457 [2024-09-30 12:26:48.253294] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:36.457 [2024-09-30 12:26:48.253353] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:36.457 [2024-09-30 12:26:48.253404] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:36.457 [2024-09-30 12:26:48.253457] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:36.457 [2024-09-30 12:26:48.253479] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:36.457 [2024-09-30 12:26:48.253498] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:36.457 [2024-09-30 12:26:48.253508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:36.457 request: 00:09:36.457 { 00:09:36.457 "name": "raid_bdev1", 00:09:36.457 "raid_level": "raid0", 00:09:36.457 "base_bdevs": [ 00:09:36.457 "malloc1", 00:09:36.457 "malloc2", 00:09:36.457 "malloc3" 00:09:36.457 ], 00:09:36.457 "strip_size_kb": 64, 00:09:36.457 "superblock": false, 00:09:36.457 "method": "bdev_raid_create", 00:09:36.457 "req_id": 1 00:09:36.457 } 00:09:36.457 Got JSON-RPC error response 00:09:36.457 response: 00:09:36.457 { 00:09:36.457 "code": -17, 00:09:36.457 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:36.457 } 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.457 [2024-09-30 12:26:48.303089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:36.457 [2024-09-30 12:26:48.303186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.457 [2024-09-30 12:26:48.303249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:36.457 [2024-09-30 12:26:48.303291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.457 [2024-09-30 12:26:48.305452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.457 [2024-09-30 12:26:48.305532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:36.457 [2024-09-30 12:26:48.305651] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:36.457 [2024-09-30 12:26:48.305734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:36.457 pt1 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.457 "name": "raid_bdev1", 00:09:36.457 "uuid": "3c78300a-f52a-418a-8325-efab715538a1", 00:09:36.457 
"strip_size_kb": 64, 00:09:36.457 "state": "configuring", 00:09:36.457 "raid_level": "raid0", 00:09:36.457 "superblock": true, 00:09:36.457 "num_base_bdevs": 3, 00:09:36.457 "num_base_bdevs_discovered": 1, 00:09:36.457 "num_base_bdevs_operational": 3, 00:09:36.457 "base_bdevs_list": [ 00:09:36.457 { 00:09:36.457 "name": "pt1", 00:09:36.457 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:36.457 "is_configured": true, 00:09:36.457 "data_offset": 2048, 00:09:36.457 "data_size": 63488 00:09:36.457 }, 00:09:36.457 { 00:09:36.457 "name": null, 00:09:36.457 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:36.457 "is_configured": false, 00:09:36.457 "data_offset": 2048, 00:09:36.457 "data_size": 63488 00:09:36.457 }, 00:09:36.457 { 00:09:36.457 "name": null, 00:09:36.457 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:36.457 "is_configured": false, 00:09:36.457 "data_offset": 2048, 00:09:36.457 "data_size": 63488 00:09:36.457 } 00:09:36.457 ] 00:09:36.457 }' 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.457 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.027 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:37.027 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:37.027 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.027 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.027 [2024-09-30 12:26:48.734464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:37.027 [2024-09-30 12:26:48.734542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.027 [2024-09-30 12:26:48.734571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:37.027 [2024-09-30 12:26:48.734584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.027 [2024-09-30 12:26:48.735168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.027 [2024-09-30 12:26:48.735210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:37.027 [2024-09-30 12:26:48.735313] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:37.027 [2024-09-30 12:26:48.735342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:37.027 pt2 00:09:37.027 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.027 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:37.027 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.027 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.027 [2024-09-30 12:26:48.746462] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:37.027 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.028 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:37.028 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.028 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.028 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.028 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.028 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.028 12:26:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.028 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.028 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.028 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.028 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.028 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.028 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.028 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.028 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.028 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.028 "name": "raid_bdev1", 00:09:37.028 "uuid": "3c78300a-f52a-418a-8325-efab715538a1", 00:09:37.028 "strip_size_kb": 64, 00:09:37.028 "state": "configuring", 00:09:37.028 "raid_level": "raid0", 00:09:37.028 "superblock": true, 00:09:37.028 "num_base_bdevs": 3, 00:09:37.028 "num_base_bdevs_discovered": 1, 00:09:37.028 "num_base_bdevs_operational": 3, 00:09:37.028 "base_bdevs_list": [ 00:09:37.028 { 00:09:37.028 "name": "pt1", 00:09:37.028 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:37.028 "is_configured": true, 00:09:37.028 "data_offset": 2048, 00:09:37.028 "data_size": 63488 00:09:37.028 }, 00:09:37.028 { 00:09:37.028 "name": null, 00:09:37.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:37.028 "is_configured": false, 00:09:37.028 "data_offset": 0, 00:09:37.028 "data_size": 63488 00:09:37.028 }, 00:09:37.028 { 00:09:37.028 "name": null, 00:09:37.028 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:37.028 
"is_configured": false, 00:09:37.028 "data_offset": 2048, 00:09:37.028 "data_size": 63488 00:09:37.028 } 00:09:37.028 ] 00:09:37.028 }' 00:09:37.028 12:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.028 12:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.295 [2024-09-30 12:26:49.161709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:37.295 [2024-09-30 12:26:49.161846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.295 [2024-09-30 12:26:49.161887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:37.295 [2024-09-30 12:26:49.161923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.295 [2024-09-30 12:26:49.162430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.295 [2024-09-30 12:26:49.162506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:37.295 [2024-09-30 12:26:49.162628] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:37.295 [2024-09-30 12:26:49.162708] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:37.295 pt2 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.295 [2024-09-30 12:26:49.173696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:37.295 [2024-09-30 12:26:49.173786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.295 [2024-09-30 12:26:49.173803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:37.295 [2024-09-30 12:26:49.173815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.295 [2024-09-30 12:26:49.174180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.295 [2024-09-30 12:26:49.174220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:37.295 [2024-09-30 12:26:49.174290] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:37.295 [2024-09-30 12:26:49.174313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:37.295 [2024-09-30 12:26:49.174441] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:37.295 [2024-09-30 12:26:49.174460] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:37.295 [2024-09-30 12:26:49.174725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:37.295 [2024-09-30 12:26:49.174897] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:37.295 [2024-09-30 12:26:49.174907] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:37.295 [2024-09-30 12:26:49.175063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.295 pt3 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.295 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.554 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.554 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.554 "name": "raid_bdev1", 00:09:37.554 "uuid": "3c78300a-f52a-418a-8325-efab715538a1", 00:09:37.554 "strip_size_kb": 64, 00:09:37.554 "state": "online", 00:09:37.554 "raid_level": "raid0", 00:09:37.554 "superblock": true, 00:09:37.554 "num_base_bdevs": 3, 00:09:37.554 "num_base_bdevs_discovered": 3, 00:09:37.554 "num_base_bdevs_operational": 3, 00:09:37.554 "base_bdevs_list": [ 00:09:37.554 { 00:09:37.554 "name": "pt1", 00:09:37.554 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:37.554 "is_configured": true, 00:09:37.554 "data_offset": 2048, 00:09:37.554 "data_size": 63488 00:09:37.554 }, 00:09:37.554 { 00:09:37.554 "name": "pt2", 00:09:37.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:37.554 "is_configured": true, 00:09:37.554 "data_offset": 2048, 00:09:37.554 "data_size": 63488 00:09:37.554 }, 00:09:37.554 { 00:09:37.554 "name": "pt3", 00:09:37.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:37.554 "is_configured": true, 00:09:37.554 "data_offset": 2048, 00:09:37.554 "data_size": 63488 00:09:37.554 } 00:09:37.554 ] 00:09:37.554 }' 00:09:37.554 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.554 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.814 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:37.814 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:37.814 12:26:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:37.814 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:37.815 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:37.815 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:37.815 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:37.815 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.815 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.815 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:37.815 [2024-09-30 12:26:49.593274] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.815 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.815 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:37.815 "name": "raid_bdev1", 00:09:37.815 "aliases": [ 00:09:37.815 "3c78300a-f52a-418a-8325-efab715538a1" 00:09:37.815 ], 00:09:37.815 "product_name": "Raid Volume", 00:09:37.815 "block_size": 512, 00:09:37.815 "num_blocks": 190464, 00:09:37.815 "uuid": "3c78300a-f52a-418a-8325-efab715538a1", 00:09:37.815 "assigned_rate_limits": { 00:09:37.815 "rw_ios_per_sec": 0, 00:09:37.815 "rw_mbytes_per_sec": 0, 00:09:37.815 "r_mbytes_per_sec": 0, 00:09:37.815 "w_mbytes_per_sec": 0 00:09:37.815 }, 00:09:37.815 "claimed": false, 00:09:37.815 "zoned": false, 00:09:37.815 "supported_io_types": { 00:09:37.815 "read": true, 00:09:37.815 "write": true, 00:09:37.815 "unmap": true, 00:09:37.815 "flush": true, 00:09:37.815 "reset": true, 00:09:37.815 "nvme_admin": false, 00:09:37.815 "nvme_io": false, 00:09:37.815 "nvme_io_md": false, 00:09:37.815 
"write_zeroes": true, 00:09:37.815 "zcopy": false, 00:09:37.815 "get_zone_info": false, 00:09:37.815 "zone_management": false, 00:09:37.815 "zone_append": false, 00:09:37.815 "compare": false, 00:09:37.815 "compare_and_write": false, 00:09:37.815 "abort": false, 00:09:37.815 "seek_hole": false, 00:09:37.815 "seek_data": false, 00:09:37.815 "copy": false, 00:09:37.815 "nvme_iov_md": false 00:09:37.815 }, 00:09:37.815 "memory_domains": [ 00:09:37.815 { 00:09:37.815 "dma_device_id": "system", 00:09:37.815 "dma_device_type": 1 00:09:37.815 }, 00:09:37.815 { 00:09:37.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.815 "dma_device_type": 2 00:09:37.815 }, 00:09:37.815 { 00:09:37.815 "dma_device_id": "system", 00:09:37.815 "dma_device_type": 1 00:09:37.815 }, 00:09:37.815 { 00:09:37.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.815 "dma_device_type": 2 00:09:37.815 }, 00:09:37.815 { 00:09:37.815 "dma_device_id": "system", 00:09:37.815 "dma_device_type": 1 00:09:37.815 }, 00:09:37.815 { 00:09:37.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.815 "dma_device_type": 2 00:09:37.815 } 00:09:37.815 ], 00:09:37.815 "driver_specific": { 00:09:37.815 "raid": { 00:09:37.815 "uuid": "3c78300a-f52a-418a-8325-efab715538a1", 00:09:37.815 "strip_size_kb": 64, 00:09:37.815 "state": "online", 00:09:37.815 "raid_level": "raid0", 00:09:37.815 "superblock": true, 00:09:37.815 "num_base_bdevs": 3, 00:09:37.815 "num_base_bdevs_discovered": 3, 00:09:37.815 "num_base_bdevs_operational": 3, 00:09:37.815 "base_bdevs_list": [ 00:09:37.815 { 00:09:37.815 "name": "pt1", 00:09:37.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:37.815 "is_configured": true, 00:09:37.815 "data_offset": 2048, 00:09:37.815 "data_size": 63488 00:09:37.815 }, 00:09:37.815 { 00:09:37.815 "name": "pt2", 00:09:37.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:37.815 "is_configured": true, 00:09:37.815 "data_offset": 2048, 00:09:37.815 "data_size": 63488 00:09:37.815 }, 00:09:37.815 
{ 00:09:37.815 "name": "pt3", 00:09:37.815 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:37.815 "is_configured": true, 00:09:37.815 "data_offset": 2048, 00:09:37.815 "data_size": 63488 00:09:37.815 } 00:09:37.815 ] 00:09:37.815 } 00:09:37.815 } 00:09:37.815 }' 00:09:37.815 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:37.815 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:37.815 pt2 00:09:37.815 pt3' 00:09:37.815 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.815 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:37.815 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.815 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:37.815 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.815 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.815 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.815 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:38.075 12:26:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.075 
[2024-09-30 12:26:49.828824] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3c78300a-f52a-418a-8325-efab715538a1 '!=' 3c78300a-f52a-418a-8325-efab715538a1 ']' 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64961 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 64961 ']' 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 64961 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64961 00:09:38.075 killing process with pid 64961 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64961' 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 64961 00:09:38.075 [2024-09-30 12:26:49.894022] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:38.075 12:26:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@974 -- # wait 64961 00:09:38.075 [2024-09-30 12:26:49.894111] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.076 [2024-09-30 12:26:49.894183] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.076 [2024-09-30 12:26:49.894205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:38.335 [2024-09-30 12:26:50.186976] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:39.716 12:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:39.716 00:09:39.716 real 0m5.201s 00:09:39.716 user 0m7.322s 00:09:39.716 sys 0m0.872s 00:09:39.716 12:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.716 12:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.716 ************************************ 00:09:39.716 END TEST raid_superblock_test 00:09:39.716 ************************************ 00:09:39.716 12:26:51 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:39.716 12:26:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:39.716 12:26:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.716 12:26:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:39.716 ************************************ 00:09:39.716 START TEST raid_read_error_test 00:09:39.716 ************************************ 00:09:39.716 12:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:39.717 12:26:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UOPhDUFUMx 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65210 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65210 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 65210 ']' 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:39.717 12:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.717 [2024-09-30 12:26:51.594171] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:39.717 [2024-09-30 12:26:51.594320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65210 ] 00:09:39.977 [2024-09-30 12:26:51.756666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.237 [2024-09-30 12:26:51.959758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.497 [2024-09-30 12:26:52.147162] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.497 [2024-09-30 12:26:52.147316] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.756 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.757 BaseBdev1_malloc 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.757 true 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.757 [2024-09-30 12:26:52.491098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:40.757 [2024-09-30 12:26:52.491159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.757 [2024-09-30 12:26:52.491177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:40.757 [2024-09-30 12:26:52.491190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.757 [2024-09-30 12:26:52.493343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.757 [2024-09-30 12:26:52.493387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:40.757 BaseBdev1 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.757 BaseBdev2_malloc 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.757 true 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.757 [2024-09-30 12:26:52.591484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:40.757 [2024-09-30 12:26:52.591541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.757 [2024-09-30 12:26:52.591576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:40.757 [2024-09-30 12:26:52.591588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.757 [2024-09-30 12:26:52.593620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.757 [2024-09-30 12:26:52.593725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:40.757 BaseBdev2 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.757 BaseBdev3_malloc 00:09:40.757 12:26:52 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.757 true 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.757 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.018 [2024-09-30 12:26:52.653653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:41.018 [2024-09-30 12:26:52.653708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.018 [2024-09-30 12:26:52.653727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:41.018 [2024-09-30 12:26:52.653756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.018 [2024-09-30 12:26:52.655833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.018 [2024-09-30 12:26:52.655938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:41.018 BaseBdev3 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.018 [2024-09-30 12:26:52.665715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:41.018 [2024-09-30 12:26:52.667537] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.018 [2024-09-30 12:26:52.667621] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:41.018 [2024-09-30 12:26:52.667854] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:41.018 [2024-09-30 12:26:52.667869] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:41.018 [2024-09-30 12:26:52.668119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:41.018 [2024-09-30 12:26:52.668279] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:41.018 [2024-09-30 12:26:52.668292] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:41.018 [2024-09-30 12:26:52.668456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.018 12:26:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.018 "name": "raid_bdev1", 00:09:41.018 "uuid": "601c9b7b-3df1-4387-8cbd-ac531624992a", 00:09:41.018 "strip_size_kb": 64, 00:09:41.018 "state": "online", 00:09:41.018 "raid_level": "raid0", 00:09:41.018 "superblock": true, 00:09:41.018 "num_base_bdevs": 3, 00:09:41.018 "num_base_bdevs_discovered": 3, 00:09:41.018 "num_base_bdevs_operational": 3, 00:09:41.018 "base_bdevs_list": [ 00:09:41.018 { 00:09:41.018 "name": "BaseBdev1", 00:09:41.018 "uuid": "d37c8b31-b2b1-5be4-8620-990b2f4ffa19", 00:09:41.018 "is_configured": true, 00:09:41.018 "data_offset": 2048, 00:09:41.018 "data_size": 63488 00:09:41.018 }, 00:09:41.018 { 00:09:41.018 "name": "BaseBdev2", 00:09:41.018 "uuid": "060b6090-1fc7-5630-b74a-d5f1e8c4e88a", 00:09:41.018 "is_configured": true, 00:09:41.018 "data_offset": 2048, 00:09:41.018 "data_size": 63488 
00:09:41.018 }, 00:09:41.018 { 00:09:41.018 "name": "BaseBdev3", 00:09:41.018 "uuid": "78a55f81-99e7-50eb-ba2e-a12501e4dee5", 00:09:41.018 "is_configured": true, 00:09:41.018 "data_offset": 2048, 00:09:41.018 "data_size": 63488 00:09:41.018 } 00:09:41.018 ] 00:09:41.018 }' 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.018 12:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.278 12:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:41.278 12:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:41.278 [2024-09-30 12:26:53.166119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.219 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.479 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.480 "name": "raid_bdev1", 00:09:42.480 "uuid": "601c9b7b-3df1-4387-8cbd-ac531624992a", 00:09:42.480 "strip_size_kb": 64, 00:09:42.480 "state": "online", 00:09:42.480 "raid_level": "raid0", 00:09:42.480 "superblock": true, 00:09:42.480 "num_base_bdevs": 3, 00:09:42.480 "num_base_bdevs_discovered": 3, 00:09:42.480 "num_base_bdevs_operational": 3, 00:09:42.480 "base_bdevs_list": [ 00:09:42.480 { 00:09:42.480 "name": "BaseBdev1", 00:09:42.480 "uuid": "d37c8b31-b2b1-5be4-8620-990b2f4ffa19", 00:09:42.480 "is_configured": true, 00:09:42.480 "data_offset": 2048, 00:09:42.480 "data_size": 63488 
00:09:42.480 }, 00:09:42.480 { 00:09:42.480 "name": "BaseBdev2", 00:09:42.480 "uuid": "060b6090-1fc7-5630-b74a-d5f1e8c4e88a", 00:09:42.480 "is_configured": true, 00:09:42.480 "data_offset": 2048, 00:09:42.480 "data_size": 63488 00:09:42.480 }, 00:09:42.480 { 00:09:42.480 "name": "BaseBdev3", 00:09:42.480 "uuid": "78a55f81-99e7-50eb-ba2e-a12501e4dee5", 00:09:42.480 "is_configured": true, 00:09:42.480 "data_offset": 2048, 00:09:42.480 "data_size": 63488 00:09:42.480 } 00:09:42.480 ] 00:09:42.480 }' 00:09:42.480 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.480 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.740 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:42.740 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.740 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.740 [2024-09-30 12:26:54.501995] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:42.740 [2024-09-30 12:26:54.502099] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.740 [2024-09-30 12:26:54.504712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.740 [2024-09-30 12:26:54.504842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.740 [2024-09-30 12:26:54.504909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.740 [2024-09-30 12:26:54.504959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:42.740 { 00:09:42.740 "results": [ 00:09:42.740 { 00:09:42.740 "job": "raid_bdev1", 00:09:42.740 "core_mask": "0x1", 00:09:42.740 "workload": "randrw", 00:09:42.740 "percentage": 50, 
00:09:42.740 "status": "finished", 00:09:42.740 "queue_depth": 1, 00:09:42.740 "io_size": 131072, 00:09:42.740 "runtime": 1.336816, 00:09:42.740 "iops": 16236.340678148676, 00:09:42.740 "mibps": 2029.5425847685844, 00:09:42.740 "io_failed": 1, 00:09:42.740 "io_timeout": 0, 00:09:42.740 "avg_latency_us": 85.60368400744044, 00:09:42.740 "min_latency_us": 25.9353711790393, 00:09:42.740 "max_latency_us": 1337.907423580786 00:09:42.740 } 00:09:42.740 ], 00:09:42.740 "core_count": 1 00:09:42.740 } 00:09:42.740 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.740 12:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65210 00:09:42.740 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 65210 ']' 00:09:42.740 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 65210 00:09:42.740 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:42.740 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.740 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65210 00:09:42.740 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:42.740 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:42.741 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65210' 00:09:42.741 killing process with pid 65210 00:09:42.741 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 65210 00:09:42.741 [2024-09-30 12:26:54.548792] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.741 12:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 65210 00:09:43.000 [2024-09-30 
12:26:54.769317] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.448 12:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UOPhDUFUMx 00:09:44.448 12:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:44.448 12:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:44.448 12:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:44.448 12:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:44.448 ************************************ 00:09:44.448 END TEST raid_read_error_test 00:09:44.448 ************************************ 00:09:44.448 12:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:44.448 12:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:44.448 12:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:44.448 00:09:44.448 real 0m4.575s 00:09:44.448 user 0m5.322s 00:09:44.448 sys 0m0.584s 00:09:44.448 12:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.448 12:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.448 12:26:56 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:44.448 12:26:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:44.448 12:26:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.448 12:26:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.448 ************************************ 00:09:44.448 START TEST raid_write_error_test 00:09:44.448 ************************************ 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:09:44.448 12:26:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:44.448 12:26:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wrAonVSL0u 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65355 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65355 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 65355 ']' 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:44.448 12:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.448 [2024-09-30 12:26:56.241563] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:44.448 [2024-09-30 12:26:56.241791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65355 ] 00:09:44.707 [2024-09-30 12:26:56.404508] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.707 [2024-09-30 12:26:56.599950] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.967 [2024-09-30 12:26:56.788331] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.967 [2024-09-30 12:26:56.788466] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.225 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:45.225 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:45.225 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.225 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:45.225 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.225 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.225 BaseBdev1_malloc 00:09:45.225 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.225 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:45.225 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.225 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.225 true 00:09:45.225 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.225 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:45.225 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.225 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.485 [2024-09-30 12:26:57.121522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:45.485 [2024-09-30 12:26:57.121584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.485 [2024-09-30 12:26:57.121605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:45.485 [2024-09-30 12:26:57.121618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.485 [2024-09-30 12:26:57.123840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.485 [2024-09-30 12:26:57.123936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:45.485 BaseBdev1 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:45.485 BaseBdev2_malloc 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.485 true 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.485 [2024-09-30 12:26:57.196098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:45.485 [2024-09-30 12:26:57.196161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.485 [2024-09-30 12:26:57.196180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:45.485 [2024-09-30 12:26:57.196194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.485 [2024-09-30 12:26:57.198287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.485 [2024-09-30 12:26:57.198334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:45.485 BaseBdev2 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.485 12:26:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.485 BaseBdev3_malloc 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.485 true 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.485 [2024-09-30 12:26:57.262105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:45.485 [2024-09-30 12:26:57.262218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.485 [2024-09-30 12:26:57.262243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:45.485 [2024-09-30 12:26:57.262256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.485 [2024-09-30 12:26:57.264396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.485 [2024-09-30 12:26:57.264441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:45.485 BaseBdev3 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.485 [2024-09-30 12:26:57.274170] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.485 [2024-09-30 12:26:57.276009] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.485 [2024-09-30 12:26:57.276098] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:45.485 [2024-09-30 12:26:57.276310] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:45.485 [2024-09-30 12:26:57.276324] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:45.485 [2024-09-30 12:26:57.276591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:45.485 [2024-09-30 12:26:57.276748] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:45.485 [2024-09-30 12:26:57.276776] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:45.485 [2024-09-30 12:26:57.276948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:45.485 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:45.486 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.486 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.486 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.486 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.486 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.486 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.486 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.486 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.486 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.486 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.486 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.486 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.486 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.486 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.486 "name": "raid_bdev1", 00:09:45.486 "uuid": "1fcd50a4-5e87-456c-851d-13ea9eaca443", 00:09:45.486 "strip_size_kb": 64, 00:09:45.486 "state": "online", 00:09:45.486 "raid_level": "raid0", 00:09:45.486 "superblock": true, 00:09:45.486 "num_base_bdevs": 3, 00:09:45.486 "num_base_bdevs_discovered": 3, 00:09:45.486 "num_base_bdevs_operational": 3, 00:09:45.486 "base_bdevs_list": [ 00:09:45.486 { 00:09:45.486 "name": "BaseBdev1", 
00:09:45.486 "uuid": "f57ef00f-22c6-5540-b4be-c83886745b88", 00:09:45.486 "is_configured": true, 00:09:45.486 "data_offset": 2048, 00:09:45.486 "data_size": 63488 00:09:45.486 }, 00:09:45.486 { 00:09:45.486 "name": "BaseBdev2", 00:09:45.486 "uuid": "bc7d8be4-3ef0-5a0c-90ab-c902c4258ca2", 00:09:45.486 "is_configured": true, 00:09:45.486 "data_offset": 2048, 00:09:45.486 "data_size": 63488 00:09:45.486 }, 00:09:45.486 { 00:09:45.486 "name": "BaseBdev3", 00:09:45.486 "uuid": "e2111a08-5c26-529c-b8af-948822188c63", 00:09:45.486 "is_configured": true, 00:09:45.486 "data_offset": 2048, 00:09:45.486 "data_size": 63488 00:09:45.486 } 00:09:45.486 ] 00:09:45.486 }' 00:09:45.486 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.486 12:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.054 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:46.054 12:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:46.054 [2024-09-30 12:26:57.754623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:46.994 12:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:46.994 12:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.994 12:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.994 12:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.994 12:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:46.994 12:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:46.994 12:26:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:46.994 12:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:46.994 12:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.994 12:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.994 12:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.994 12:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.994 12:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.994 12:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.994 12:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.995 12:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.995 12:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.995 12:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.995 12:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.995 12:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.995 12:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.995 12:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.995 12:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.995 "name": "raid_bdev1", 00:09:46.995 "uuid": "1fcd50a4-5e87-456c-851d-13ea9eaca443", 00:09:46.995 "strip_size_kb": 64, 00:09:46.995 "state": "online", 00:09:46.995 
"raid_level": "raid0", 00:09:46.995 "superblock": true, 00:09:46.995 "num_base_bdevs": 3, 00:09:46.995 "num_base_bdevs_discovered": 3, 00:09:46.995 "num_base_bdevs_operational": 3, 00:09:46.995 "base_bdevs_list": [ 00:09:46.995 { 00:09:46.995 "name": "BaseBdev1", 00:09:46.995 "uuid": "f57ef00f-22c6-5540-b4be-c83886745b88", 00:09:46.995 "is_configured": true, 00:09:46.995 "data_offset": 2048, 00:09:46.995 "data_size": 63488 00:09:46.995 }, 00:09:46.995 { 00:09:46.995 "name": "BaseBdev2", 00:09:46.995 "uuid": "bc7d8be4-3ef0-5a0c-90ab-c902c4258ca2", 00:09:46.995 "is_configured": true, 00:09:46.995 "data_offset": 2048, 00:09:46.995 "data_size": 63488 00:09:46.995 }, 00:09:46.995 { 00:09:46.995 "name": "BaseBdev3", 00:09:46.995 "uuid": "e2111a08-5c26-529c-b8af-948822188c63", 00:09:46.995 "is_configured": true, 00:09:46.995 "data_offset": 2048, 00:09:46.995 "data_size": 63488 00:09:46.995 } 00:09:46.995 ] 00:09:46.995 }' 00:09:46.995 12:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.995 12:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.321 12:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:47.321 12:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.321 12:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.321 [2024-09-30 12:26:59.120930] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.321 [2024-09-30 12:26:59.121034] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.321 [2024-09-30 12:26:59.123611] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.321 [2024-09-30 12:26:59.123705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.321 [2024-09-30 12:26:59.123780] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.321 [2024-09-30 12:26:59.123839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:47.321 12:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.321 12:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65355 00:09:47.322 12:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 65355 ']' 00:09:47.322 { 00:09:47.322 "results": [ 00:09:47.322 { 00:09:47.322 "job": "raid_bdev1", 00:09:47.322 "core_mask": "0x1", 00:09:47.322 "workload": "randrw", 00:09:47.322 "percentage": 50, 00:09:47.322 "status": "finished", 00:09:47.322 "queue_depth": 1, 00:09:47.322 "io_size": 131072, 00:09:47.322 "runtime": 1.367225, 00:09:47.322 "iops": 16072.702005887839, 00:09:47.322 "mibps": 2009.0877507359799, 00:09:47.322 "io_failed": 1, 00:09:47.322 "io_timeout": 0, 00:09:47.322 "avg_latency_us": 86.5295514121797, 00:09:47.322 "min_latency_us": 25.823580786026202, 00:09:47.322 "max_latency_us": 1402.2986899563318 00:09:47.322 } 00:09:47.322 ], 00:09:47.322 "core_count": 1 00:09:47.322 } 00:09:47.322 12:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 65355 00:09:47.322 12:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:47.322 12:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:47.322 12:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65355 00:09:47.322 killing process with pid 65355 00:09:47.322 12:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:47.322 12:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:47.322 12:26:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65355' 00:09:47.322 12:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 65355 00:09:47.322 [2024-09-30 12:26:59.160719] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.322 12:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 65355 00:09:47.583 [2024-09-30 12:26:59.385147] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:48.963 12:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:48.963 12:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wrAonVSL0u 00:09:48.963 12:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:48.963 12:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:48.963 12:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:48.963 ************************************ 00:09:48.963 END TEST raid_write_error_test 00:09:48.963 ************************************ 00:09:48.963 12:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:48.964 12:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:48.964 12:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:48.964 00:09:48.964 real 0m4.546s 00:09:48.964 user 0m5.290s 00:09:48.964 sys 0m0.559s 00:09:48.964 12:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.964 12:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.964 12:27:00 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:48.964 12:27:00 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:48.964 12:27:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:48.964 12:27:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.964 12:27:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.964 ************************************ 00:09:48.964 START TEST raid_state_function_test 00:09:48.964 ************************************ 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:48.964 12:27:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65499 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65499' 00:09:48.964 Process raid pid: 65499 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65499 00:09:48.964 12:27:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 65499 ']' 00:09:48.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:48.964 12:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.964 [2024-09-30 12:27:00.850696] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:48.964 [2024-09-30 12:27:00.850846] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.223 [2024-09-30 12:27:00.995115] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.482 [2024-09-30 12:27:01.204076] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.741 [2024-09-30 12:27:01.404237] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.741 [2024-09-30 12:27:01.404277] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.000 [2024-09-30 12:27:01.695583] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.000 [2024-09-30 12:27:01.695647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.000 [2024-09-30 12:27:01.695659] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:50.000 [2024-09-30 12:27:01.695670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:50.000 [2024-09-30 12:27:01.695678] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:50.000 [2024-09-30 12:27:01.695689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.000 "name": "Existed_Raid", 00:09:50.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.000 "strip_size_kb": 64, 00:09:50.000 "state": "configuring", 00:09:50.000 "raid_level": "concat", 00:09:50.000 "superblock": false, 00:09:50.000 "num_base_bdevs": 3, 00:09:50.000 "num_base_bdevs_discovered": 0, 00:09:50.000 "num_base_bdevs_operational": 3, 00:09:50.000 "base_bdevs_list": [ 00:09:50.000 { 00:09:50.000 "name": "BaseBdev1", 00:09:50.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.000 "is_configured": false, 00:09:50.000 "data_offset": 0, 00:09:50.000 "data_size": 0 00:09:50.000 }, 00:09:50.000 { 00:09:50.000 "name": "BaseBdev2", 00:09:50.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.000 "is_configured": false, 00:09:50.000 "data_offset": 0, 00:09:50.000 "data_size": 0 00:09:50.000 }, 00:09:50.000 { 00:09:50.000 "name": "BaseBdev3", 00:09:50.000 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:50.000 "is_configured": false, 00:09:50.000 "data_offset": 0, 00:09:50.000 "data_size": 0 00:09:50.000 } 00:09:50.000 ] 00:09:50.000 }' 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.000 12:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.260 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:50.260 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.260 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.260 [2024-09-30 12:27:02.134716] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:50.260 [2024-09-30 12:27:02.134839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:50.260 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.260 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:50.260 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.260 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.260 [2024-09-30 12:27:02.146725] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.260 [2024-09-30 12:27:02.146843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.260 [2024-09-30 12:27:02.146877] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:50.260 [2024-09-30 12:27:02.146911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:50.260 [2024-09-30 12:27:02.146949] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:50.260 [2024-09-30 12:27:02.146994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:50.260 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.260 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:50.260 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.260 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.520 [2024-09-30 12:27:02.228893] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.520 BaseBdev1 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.520 [ 00:09:50.520 { 00:09:50.520 "name": "BaseBdev1", 00:09:50.520 "aliases": [ 00:09:50.520 "850b22db-4a31-4961-867d-73d351f069e8" 00:09:50.520 ], 00:09:50.520 "product_name": "Malloc disk", 00:09:50.520 "block_size": 512, 00:09:50.520 "num_blocks": 65536, 00:09:50.520 "uuid": "850b22db-4a31-4961-867d-73d351f069e8", 00:09:50.520 "assigned_rate_limits": { 00:09:50.520 "rw_ios_per_sec": 0, 00:09:50.520 "rw_mbytes_per_sec": 0, 00:09:50.520 "r_mbytes_per_sec": 0, 00:09:50.520 "w_mbytes_per_sec": 0 00:09:50.520 }, 00:09:50.520 "claimed": true, 00:09:50.520 "claim_type": "exclusive_write", 00:09:50.520 "zoned": false, 00:09:50.520 "supported_io_types": { 00:09:50.520 "read": true, 00:09:50.520 "write": true, 00:09:50.520 "unmap": true, 00:09:50.520 "flush": true, 00:09:50.520 "reset": true, 00:09:50.520 "nvme_admin": false, 00:09:50.520 "nvme_io": false, 00:09:50.520 "nvme_io_md": false, 00:09:50.520 "write_zeroes": true, 00:09:50.520 "zcopy": true, 00:09:50.520 "get_zone_info": false, 00:09:50.520 "zone_management": false, 00:09:50.520 "zone_append": false, 00:09:50.520 "compare": false, 00:09:50.520 "compare_and_write": false, 00:09:50.520 "abort": true, 00:09:50.520 "seek_hole": false, 00:09:50.520 "seek_data": false, 00:09:50.520 "copy": true, 00:09:50.520 "nvme_iov_md": false 00:09:50.520 }, 00:09:50.520 "memory_domains": [ 00:09:50.520 { 00:09:50.520 "dma_device_id": "system", 00:09:50.520 "dma_device_type": 1 00:09:50.520 }, 00:09:50.520 { 00:09:50.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:50.520 "dma_device_type": 2 00:09:50.520 } 00:09:50.520 ], 00:09:50.520 "driver_specific": {} 00:09:50.520 } 00:09:50.520 ] 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.520 12:27:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.520 "name": "Existed_Raid", 00:09:50.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.520 "strip_size_kb": 64, 00:09:50.520 "state": "configuring", 00:09:50.520 "raid_level": "concat", 00:09:50.520 "superblock": false, 00:09:50.520 "num_base_bdevs": 3, 00:09:50.520 "num_base_bdevs_discovered": 1, 00:09:50.520 "num_base_bdevs_operational": 3, 00:09:50.520 "base_bdevs_list": [ 00:09:50.520 { 00:09:50.520 "name": "BaseBdev1", 00:09:50.520 "uuid": "850b22db-4a31-4961-867d-73d351f069e8", 00:09:50.520 "is_configured": true, 00:09:50.520 "data_offset": 0, 00:09:50.520 "data_size": 65536 00:09:50.520 }, 00:09:50.520 { 00:09:50.520 "name": "BaseBdev2", 00:09:50.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.520 "is_configured": false, 00:09:50.520 "data_offset": 0, 00:09:50.520 "data_size": 0 00:09:50.520 }, 00:09:50.520 { 00:09:50.520 "name": "BaseBdev3", 00:09:50.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.520 "is_configured": false, 00:09:50.520 "data_offset": 0, 00:09:50.520 "data_size": 0 00:09:50.520 } 00:09:50.520 ] 00:09:50.520 }' 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.520 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.780 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:50.780 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.780 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.780 [2024-09-30 12:27:02.660244] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:50.780 [2024-09-30 12:27:02.660369] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:50.780 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.780 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:50.780 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.780 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.780 [2024-09-30 12:27:02.668263] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.780 [2024-09-30 12:27:02.670083] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:50.780 [2024-09-30 12:27:02.670147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:50.780 [2024-09-30 12:27:02.670159] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:50.780 [2024-09-30 12:27:02.670171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:50.780 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.780 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:50.780 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:50.780 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:50.780 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.780 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.780 12:27:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.780 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.780 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.780 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.780 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.780 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.780 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.040 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.040 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.040 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.040 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.040 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.040 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.040 "name": "Existed_Raid", 00:09:51.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.040 "strip_size_kb": 64, 00:09:51.040 "state": "configuring", 00:09:51.040 "raid_level": "concat", 00:09:51.040 "superblock": false, 00:09:51.040 "num_base_bdevs": 3, 00:09:51.040 "num_base_bdevs_discovered": 1, 00:09:51.040 "num_base_bdevs_operational": 3, 00:09:51.040 "base_bdevs_list": [ 00:09:51.040 { 00:09:51.040 "name": "BaseBdev1", 00:09:51.040 "uuid": "850b22db-4a31-4961-867d-73d351f069e8", 00:09:51.040 "is_configured": true, 00:09:51.040 "data_offset": 
0, 00:09:51.040 "data_size": 65536 00:09:51.040 }, 00:09:51.040 { 00:09:51.040 "name": "BaseBdev2", 00:09:51.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.040 "is_configured": false, 00:09:51.040 "data_offset": 0, 00:09:51.040 "data_size": 0 00:09:51.040 }, 00:09:51.040 { 00:09:51.040 "name": "BaseBdev3", 00:09:51.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.040 "is_configured": false, 00:09:51.040 "data_offset": 0, 00:09:51.040 "data_size": 0 00:09:51.040 } 00:09:51.040 ] 00:09:51.040 }' 00:09:51.040 12:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.040 12:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.300 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:51.300 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.300 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.300 [2024-09-30 12:27:03.180769] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.300 BaseBdev2 00:09:51.300 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.300 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:51.300 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:51.300 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:51.300 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:51.300 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:51.300 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:51.300 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:51.300 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.300 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.300 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.300 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:51.300 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.300 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.560 [ 00:09:51.560 { 00:09:51.560 "name": "BaseBdev2", 00:09:51.560 "aliases": [ 00:09:51.560 "b12fa509-d99d-43ca-ac6d-5fcfc933c588" 00:09:51.560 ], 00:09:51.560 "product_name": "Malloc disk", 00:09:51.560 "block_size": 512, 00:09:51.560 "num_blocks": 65536, 00:09:51.560 "uuid": "b12fa509-d99d-43ca-ac6d-5fcfc933c588", 00:09:51.560 "assigned_rate_limits": { 00:09:51.560 "rw_ios_per_sec": 0, 00:09:51.560 "rw_mbytes_per_sec": 0, 00:09:51.560 "r_mbytes_per_sec": 0, 00:09:51.560 "w_mbytes_per_sec": 0 00:09:51.560 }, 00:09:51.560 "claimed": true, 00:09:51.560 "claim_type": "exclusive_write", 00:09:51.560 "zoned": false, 00:09:51.560 "supported_io_types": { 00:09:51.560 "read": true, 00:09:51.560 "write": true, 00:09:51.560 "unmap": true, 00:09:51.560 "flush": true, 00:09:51.560 "reset": true, 00:09:51.560 "nvme_admin": false, 00:09:51.560 "nvme_io": false, 00:09:51.560 "nvme_io_md": false, 00:09:51.560 "write_zeroes": true, 00:09:51.560 "zcopy": true, 00:09:51.560 "get_zone_info": false, 00:09:51.560 "zone_management": false, 00:09:51.560 "zone_append": false, 00:09:51.560 "compare": false, 00:09:51.560 "compare_and_write": false, 00:09:51.560 "abort": true, 00:09:51.560 "seek_hole": 
false, 00:09:51.560 "seek_data": false, 00:09:51.560 "copy": true, 00:09:51.560 "nvme_iov_md": false 00:09:51.560 }, 00:09:51.560 "memory_domains": [ 00:09:51.560 { 00:09:51.560 "dma_device_id": "system", 00:09:51.560 "dma_device_type": 1 00:09:51.560 }, 00:09:51.560 { 00:09:51.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.560 "dma_device_type": 2 00:09:51.561 } 00:09:51.561 ], 00:09:51.561 "driver_specific": {} 00:09:51.561 } 00:09:51.561 ] 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.561 "name": "Existed_Raid", 00:09:51.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.561 "strip_size_kb": 64, 00:09:51.561 "state": "configuring", 00:09:51.561 "raid_level": "concat", 00:09:51.561 "superblock": false, 00:09:51.561 "num_base_bdevs": 3, 00:09:51.561 "num_base_bdevs_discovered": 2, 00:09:51.561 "num_base_bdevs_operational": 3, 00:09:51.561 "base_bdevs_list": [ 00:09:51.561 { 00:09:51.561 "name": "BaseBdev1", 00:09:51.561 "uuid": "850b22db-4a31-4961-867d-73d351f069e8", 00:09:51.561 "is_configured": true, 00:09:51.561 "data_offset": 0, 00:09:51.561 "data_size": 65536 00:09:51.561 }, 00:09:51.561 { 00:09:51.561 "name": "BaseBdev2", 00:09:51.561 "uuid": "b12fa509-d99d-43ca-ac6d-5fcfc933c588", 00:09:51.561 "is_configured": true, 00:09:51.561 "data_offset": 0, 00:09:51.561 "data_size": 65536 00:09:51.561 }, 00:09:51.561 { 00:09:51.561 "name": "BaseBdev3", 00:09:51.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.561 "is_configured": false, 00:09:51.561 "data_offset": 0, 00:09:51.561 "data_size": 0 00:09:51.561 } 00:09:51.561 ] 00:09:51.561 }' 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.561 12:27:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:51.821 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:51.821 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.821 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.821 [2024-09-30 12:27:03.680226] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.821 [2024-09-30 12:27:03.680332] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:51.821 [2024-09-30 12:27:03.680366] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:51.821 [2024-09-30 12:27:03.680705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:51.821 [2024-09-30 12:27:03.680964] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:51.821 [2024-09-30 12:27:03.680982] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:51.821 [2024-09-30 12:27:03.681266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.821 BaseBdev3 00:09:51.821 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.821 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:51.821 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:51.821 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:51.821 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:51.821 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:51.821 12:27:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:51.821 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:51.821 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.821 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.821 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.821 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:51.821 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.821 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.821 [ 00:09:51.821 { 00:09:51.821 "name": "BaseBdev3", 00:09:51.821 "aliases": [ 00:09:51.821 "f581ce96-e5f3-4654-bf43-4b21c3f9438d" 00:09:51.821 ], 00:09:51.821 "product_name": "Malloc disk", 00:09:51.821 "block_size": 512, 00:09:51.821 "num_blocks": 65536, 00:09:51.821 "uuid": "f581ce96-e5f3-4654-bf43-4b21c3f9438d", 00:09:51.821 "assigned_rate_limits": { 00:09:51.821 "rw_ios_per_sec": 0, 00:09:51.821 "rw_mbytes_per_sec": 0, 00:09:51.821 "r_mbytes_per_sec": 0, 00:09:51.821 "w_mbytes_per_sec": 0 00:09:51.821 }, 00:09:51.821 "claimed": true, 00:09:51.821 "claim_type": "exclusive_write", 00:09:51.821 "zoned": false, 00:09:51.821 "supported_io_types": { 00:09:51.821 "read": true, 00:09:51.821 "write": true, 00:09:51.821 "unmap": true, 00:09:51.821 "flush": true, 00:09:51.821 "reset": true, 00:09:51.821 "nvme_admin": false, 00:09:51.821 "nvme_io": false, 00:09:51.821 "nvme_io_md": false, 00:09:51.821 "write_zeroes": true, 00:09:51.821 "zcopy": true, 00:09:51.821 "get_zone_info": false, 00:09:51.821 "zone_management": false, 00:09:51.821 "zone_append": false, 00:09:51.821 "compare": false, 
00:09:51.821 "compare_and_write": false, 00:09:51.821 "abort": true, 00:09:51.821 "seek_hole": false, 00:09:52.081 "seek_data": false, 00:09:52.081 "copy": true, 00:09:52.081 "nvme_iov_md": false 00:09:52.081 }, 00:09:52.081 "memory_domains": [ 00:09:52.081 { 00:09:52.081 "dma_device_id": "system", 00:09:52.081 "dma_device_type": 1 00:09:52.081 }, 00:09:52.081 { 00:09:52.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.081 "dma_device_type": 2 00:09:52.081 } 00:09:52.081 ], 00:09:52.081 "driver_specific": {} 00:09:52.081 } 00:09:52.081 ] 00:09:52.081 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.081 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:52.081 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:52.081 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:52.081 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:52.081 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.081 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.081 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.081 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.081 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.081 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.081 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.081 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:52.081 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.081 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.082 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.082 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.082 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.082 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.082 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.082 "name": "Existed_Raid", 00:09:52.082 "uuid": "717bc97c-d728-4206-b96c-300f17f0b574", 00:09:52.082 "strip_size_kb": 64, 00:09:52.082 "state": "online", 00:09:52.082 "raid_level": "concat", 00:09:52.082 "superblock": false, 00:09:52.082 "num_base_bdevs": 3, 00:09:52.082 "num_base_bdevs_discovered": 3, 00:09:52.082 "num_base_bdevs_operational": 3, 00:09:52.082 "base_bdevs_list": [ 00:09:52.082 { 00:09:52.082 "name": "BaseBdev1", 00:09:52.082 "uuid": "850b22db-4a31-4961-867d-73d351f069e8", 00:09:52.082 "is_configured": true, 00:09:52.082 "data_offset": 0, 00:09:52.082 "data_size": 65536 00:09:52.082 }, 00:09:52.082 { 00:09:52.082 "name": "BaseBdev2", 00:09:52.082 "uuid": "b12fa509-d99d-43ca-ac6d-5fcfc933c588", 00:09:52.082 "is_configured": true, 00:09:52.082 "data_offset": 0, 00:09:52.082 "data_size": 65536 00:09:52.082 }, 00:09:52.082 { 00:09:52.082 "name": "BaseBdev3", 00:09:52.082 "uuid": "f581ce96-e5f3-4654-bf43-4b21c3f9438d", 00:09:52.082 "is_configured": true, 00:09:52.082 "data_offset": 0, 00:09:52.082 "data_size": 65536 00:09:52.082 } 00:09:52.082 ] 00:09:52.082 }' 00:09:52.082 12:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:52.082 12:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.342 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:52.342 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:52.342 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:52.342 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:52.342 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:52.342 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:52.342 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:52.342 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.342 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.342 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:52.342 [2024-09-30 12:27:04.155815] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.342 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.342 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:52.342 "name": "Existed_Raid", 00:09:52.342 "aliases": [ 00:09:52.342 "717bc97c-d728-4206-b96c-300f17f0b574" 00:09:52.342 ], 00:09:52.342 "product_name": "Raid Volume", 00:09:52.342 "block_size": 512, 00:09:52.342 "num_blocks": 196608, 00:09:52.342 "uuid": "717bc97c-d728-4206-b96c-300f17f0b574", 00:09:52.342 "assigned_rate_limits": { 00:09:52.342 "rw_ios_per_sec": 0, 00:09:52.342 "rw_mbytes_per_sec": 0, 00:09:52.342 "r_mbytes_per_sec": 
0, 00:09:52.342 "w_mbytes_per_sec": 0 00:09:52.342 }, 00:09:52.342 "claimed": false, 00:09:52.342 "zoned": false, 00:09:52.342 "supported_io_types": { 00:09:52.342 "read": true, 00:09:52.342 "write": true, 00:09:52.342 "unmap": true, 00:09:52.342 "flush": true, 00:09:52.342 "reset": true, 00:09:52.342 "nvme_admin": false, 00:09:52.342 "nvme_io": false, 00:09:52.342 "nvme_io_md": false, 00:09:52.342 "write_zeroes": true, 00:09:52.342 "zcopy": false, 00:09:52.342 "get_zone_info": false, 00:09:52.342 "zone_management": false, 00:09:52.342 "zone_append": false, 00:09:52.342 "compare": false, 00:09:52.342 "compare_and_write": false, 00:09:52.342 "abort": false, 00:09:52.342 "seek_hole": false, 00:09:52.342 "seek_data": false, 00:09:52.342 "copy": false, 00:09:52.342 "nvme_iov_md": false 00:09:52.342 }, 00:09:52.342 "memory_domains": [ 00:09:52.342 { 00:09:52.342 "dma_device_id": "system", 00:09:52.342 "dma_device_type": 1 00:09:52.342 }, 00:09:52.342 { 00:09:52.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.342 "dma_device_type": 2 00:09:52.342 }, 00:09:52.342 { 00:09:52.342 "dma_device_id": "system", 00:09:52.342 "dma_device_type": 1 00:09:52.342 }, 00:09:52.342 { 00:09:52.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.342 "dma_device_type": 2 00:09:52.342 }, 00:09:52.342 { 00:09:52.342 "dma_device_id": "system", 00:09:52.342 "dma_device_type": 1 00:09:52.342 }, 00:09:52.342 { 00:09:52.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.342 "dma_device_type": 2 00:09:52.342 } 00:09:52.342 ], 00:09:52.342 "driver_specific": { 00:09:52.342 "raid": { 00:09:52.342 "uuid": "717bc97c-d728-4206-b96c-300f17f0b574", 00:09:52.342 "strip_size_kb": 64, 00:09:52.342 "state": "online", 00:09:52.342 "raid_level": "concat", 00:09:52.342 "superblock": false, 00:09:52.342 "num_base_bdevs": 3, 00:09:52.342 "num_base_bdevs_discovered": 3, 00:09:52.342 "num_base_bdevs_operational": 3, 00:09:52.342 "base_bdevs_list": [ 00:09:52.342 { 00:09:52.342 "name": "BaseBdev1", 
00:09:52.342 "uuid": "850b22db-4a31-4961-867d-73d351f069e8", 00:09:52.342 "is_configured": true, 00:09:52.342 "data_offset": 0, 00:09:52.342 "data_size": 65536 00:09:52.342 }, 00:09:52.342 { 00:09:52.342 "name": "BaseBdev2", 00:09:52.342 "uuid": "b12fa509-d99d-43ca-ac6d-5fcfc933c588", 00:09:52.342 "is_configured": true, 00:09:52.342 "data_offset": 0, 00:09:52.342 "data_size": 65536 00:09:52.342 }, 00:09:52.342 { 00:09:52.342 "name": "BaseBdev3", 00:09:52.342 "uuid": "f581ce96-e5f3-4654-bf43-4b21c3f9438d", 00:09:52.342 "is_configured": true, 00:09:52.342 "data_offset": 0, 00:09:52.342 "data_size": 65536 00:09:52.342 } 00:09:52.342 ] 00:09:52.342 } 00:09:52.342 } 00:09:52.342 }' 00:09:52.342 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:52.342 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:52.342 BaseBdev2 00:09:52.342 BaseBdev3' 00:09:52.342 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.602 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:52.602 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.603 [2024-09-30 12:27:04.391073] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:52.603 [2024-09-30 12:27:04.391102] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.603 [2024-09-30 12:27:04.391161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.603 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.862 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.862 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.862 "name": "Existed_Raid", 00:09:52.862 "uuid": "717bc97c-d728-4206-b96c-300f17f0b574", 00:09:52.862 "strip_size_kb": 64, 00:09:52.862 "state": "offline", 00:09:52.862 "raid_level": "concat", 00:09:52.862 "superblock": false, 00:09:52.862 "num_base_bdevs": 3, 00:09:52.862 "num_base_bdevs_discovered": 2, 00:09:52.862 "num_base_bdevs_operational": 2, 00:09:52.862 "base_bdevs_list": [ 00:09:52.862 { 00:09:52.862 "name": null, 00:09:52.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.863 "is_configured": false, 00:09:52.863 "data_offset": 0, 00:09:52.863 "data_size": 65536 00:09:52.863 }, 00:09:52.863 { 00:09:52.863 "name": "BaseBdev2", 00:09:52.863 "uuid": 
"b12fa509-d99d-43ca-ac6d-5fcfc933c588", 00:09:52.863 "is_configured": true, 00:09:52.863 "data_offset": 0, 00:09:52.863 "data_size": 65536 00:09:52.863 }, 00:09:52.863 { 00:09:52.863 "name": "BaseBdev3", 00:09:52.863 "uuid": "f581ce96-e5f3-4654-bf43-4b21c3f9438d", 00:09:52.863 "is_configured": true, 00:09:52.863 "data_offset": 0, 00:09:52.863 "data_size": 65536 00:09:52.863 } 00:09:52.863 ] 00:09:52.863 }' 00:09:52.863 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.863 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.122 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:53.122 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:53.122 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.122 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.122 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.122 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:53.122 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.122 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:53.122 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:53.122 12:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:53.122 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.122 12:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.122 [2024-09-30 12:27:04.961227] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:53.381 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.382 [2024-09-30 12:27:05.106614] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:53.382 [2024-09-30 12:27:05.106746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:53.382 12:27:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.382 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.642 BaseBdev2 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:53.642 
12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.642 [ 00:09:53.642 { 00:09:53.642 "name": "BaseBdev2", 00:09:53.642 "aliases": [ 00:09:53.642 "54805eac-aeef-4f1f-b75c-477edd5ef234" 00:09:53.642 ], 00:09:53.642 "product_name": "Malloc disk", 00:09:53.642 "block_size": 512, 00:09:53.642 "num_blocks": 65536, 00:09:53.642 "uuid": "54805eac-aeef-4f1f-b75c-477edd5ef234", 00:09:53.642 "assigned_rate_limits": { 00:09:53.642 "rw_ios_per_sec": 0, 00:09:53.642 "rw_mbytes_per_sec": 0, 00:09:53.642 "r_mbytes_per_sec": 0, 00:09:53.642 "w_mbytes_per_sec": 0 00:09:53.642 }, 00:09:53.642 "claimed": false, 00:09:53.642 "zoned": false, 00:09:53.642 "supported_io_types": { 00:09:53.642 "read": true, 00:09:53.642 "write": true, 00:09:53.642 "unmap": true, 00:09:53.642 "flush": true, 00:09:53.642 "reset": true, 00:09:53.642 "nvme_admin": false, 00:09:53.642 "nvme_io": false, 00:09:53.642 "nvme_io_md": false, 00:09:53.642 "write_zeroes": true, 
00:09:53.642 "zcopy": true, 00:09:53.642 "get_zone_info": false, 00:09:53.642 "zone_management": false, 00:09:53.642 "zone_append": false, 00:09:53.642 "compare": false, 00:09:53.642 "compare_and_write": false, 00:09:53.642 "abort": true, 00:09:53.642 "seek_hole": false, 00:09:53.642 "seek_data": false, 00:09:53.642 "copy": true, 00:09:53.642 "nvme_iov_md": false 00:09:53.642 }, 00:09:53.642 "memory_domains": [ 00:09:53.642 { 00:09:53.642 "dma_device_id": "system", 00:09:53.642 "dma_device_type": 1 00:09:53.642 }, 00:09:53.642 { 00:09:53.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.642 "dma_device_type": 2 00:09:53.642 } 00:09:53.642 ], 00:09:53.642 "driver_specific": {} 00:09:53.642 } 00:09:53.642 ] 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.642 BaseBdev3 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:53.642 12:27:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.642 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.642 [ 00:09:53.642 { 00:09:53.642 "name": "BaseBdev3", 00:09:53.642 "aliases": [ 00:09:53.642 "fdc0fc36-7d46-4aef-904e-cfe9a4c4ab46" 00:09:53.642 ], 00:09:53.642 "product_name": "Malloc disk", 00:09:53.642 "block_size": 512, 00:09:53.642 "num_blocks": 65536, 00:09:53.642 "uuid": "fdc0fc36-7d46-4aef-904e-cfe9a4c4ab46", 00:09:53.642 "assigned_rate_limits": { 00:09:53.642 "rw_ios_per_sec": 0, 00:09:53.642 "rw_mbytes_per_sec": 0, 00:09:53.642 "r_mbytes_per_sec": 0, 00:09:53.642 "w_mbytes_per_sec": 0 00:09:53.642 }, 00:09:53.642 "claimed": false, 00:09:53.642 "zoned": false, 00:09:53.642 "supported_io_types": { 00:09:53.642 "read": true, 00:09:53.642 "write": true, 00:09:53.642 "unmap": true, 00:09:53.642 "flush": true, 00:09:53.642 "reset": true, 00:09:53.643 "nvme_admin": false, 00:09:53.643 "nvme_io": false, 00:09:53.643 "nvme_io_md": false, 00:09:53.643 "write_zeroes": true, 
00:09:53.643 "zcopy": true, 00:09:53.643 "get_zone_info": false, 00:09:53.643 "zone_management": false, 00:09:53.643 "zone_append": false, 00:09:53.643 "compare": false, 00:09:53.643 "compare_and_write": false, 00:09:53.643 "abort": true, 00:09:53.643 "seek_hole": false, 00:09:53.643 "seek_data": false, 00:09:53.643 "copy": true, 00:09:53.643 "nvme_iov_md": false 00:09:53.643 }, 00:09:53.643 "memory_domains": [ 00:09:53.643 { 00:09:53.643 "dma_device_id": "system", 00:09:53.643 "dma_device_type": 1 00:09:53.643 }, 00:09:53.643 { 00:09:53.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.643 "dma_device_type": 2 00:09:53.643 } 00:09:53.643 ], 00:09:53.643 "driver_specific": {} 00:09:53.643 } 00:09:53.643 ] 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.643 [2024-09-30 12:27:05.417086] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.643 [2024-09-30 12:27:05.417181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.643 [2024-09-30 12:27:05.417244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.643 [2024-09-30 12:27:05.419008] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.643 "name": "Existed_Raid", 00:09:53.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.643 "strip_size_kb": 64, 00:09:53.643 "state": "configuring", 00:09:53.643 "raid_level": "concat", 00:09:53.643 "superblock": false, 00:09:53.643 "num_base_bdevs": 3, 00:09:53.643 "num_base_bdevs_discovered": 2, 00:09:53.643 "num_base_bdevs_operational": 3, 00:09:53.643 "base_bdevs_list": [ 00:09:53.643 { 00:09:53.643 "name": "BaseBdev1", 00:09:53.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.643 "is_configured": false, 00:09:53.643 "data_offset": 0, 00:09:53.643 "data_size": 0 00:09:53.643 }, 00:09:53.643 { 00:09:53.643 "name": "BaseBdev2", 00:09:53.643 "uuid": "54805eac-aeef-4f1f-b75c-477edd5ef234", 00:09:53.643 "is_configured": true, 00:09:53.643 "data_offset": 0, 00:09:53.643 "data_size": 65536 00:09:53.643 }, 00:09:53.643 { 00:09:53.643 "name": "BaseBdev3", 00:09:53.643 "uuid": "fdc0fc36-7d46-4aef-904e-cfe9a4c4ab46", 00:09:53.643 "is_configured": true, 00:09:53.643 "data_offset": 0, 00:09:53.643 "data_size": 65536 00:09:53.643 } 00:09:53.643 ] 00:09:53.643 }' 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.643 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.212 [2024-09-30 12:27:05.896239] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.212 "name": "Existed_Raid", 00:09:54.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.212 "strip_size_kb": 64, 00:09:54.212 "state": "configuring", 00:09:54.212 "raid_level": "concat", 00:09:54.212 "superblock": false, 
00:09:54.212 "num_base_bdevs": 3, 00:09:54.212 "num_base_bdevs_discovered": 1, 00:09:54.212 "num_base_bdevs_operational": 3, 00:09:54.212 "base_bdevs_list": [ 00:09:54.212 { 00:09:54.212 "name": "BaseBdev1", 00:09:54.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.212 "is_configured": false, 00:09:54.212 "data_offset": 0, 00:09:54.212 "data_size": 0 00:09:54.212 }, 00:09:54.212 { 00:09:54.212 "name": null, 00:09:54.212 "uuid": "54805eac-aeef-4f1f-b75c-477edd5ef234", 00:09:54.212 "is_configured": false, 00:09:54.212 "data_offset": 0, 00:09:54.212 "data_size": 65536 00:09:54.212 }, 00:09:54.212 { 00:09:54.212 "name": "BaseBdev3", 00:09:54.212 "uuid": "fdc0fc36-7d46-4aef-904e-cfe9a4c4ab46", 00:09:54.212 "is_configured": true, 00:09:54.212 "data_offset": 0, 00:09:54.212 "data_size": 65536 00:09:54.212 } 00:09:54.212 ] 00:09:54.212 }' 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.212 12:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.472 
12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.472 [2024-09-30 12:27:06.351232] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.472 BaseBdev1 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.472 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.731 [ 00:09:54.732 { 00:09:54.732 "name": "BaseBdev1", 00:09:54.732 "aliases": [ 00:09:54.732 "0c84bee4-f5b0-4cc7-981b-04673a93fe5d" 00:09:54.732 ], 00:09:54.732 "product_name": 
"Malloc disk", 00:09:54.732 "block_size": 512, 00:09:54.732 "num_blocks": 65536, 00:09:54.732 "uuid": "0c84bee4-f5b0-4cc7-981b-04673a93fe5d", 00:09:54.732 "assigned_rate_limits": { 00:09:54.732 "rw_ios_per_sec": 0, 00:09:54.732 "rw_mbytes_per_sec": 0, 00:09:54.732 "r_mbytes_per_sec": 0, 00:09:54.732 "w_mbytes_per_sec": 0 00:09:54.732 }, 00:09:54.732 "claimed": true, 00:09:54.732 "claim_type": "exclusive_write", 00:09:54.732 "zoned": false, 00:09:54.732 "supported_io_types": { 00:09:54.732 "read": true, 00:09:54.732 "write": true, 00:09:54.732 "unmap": true, 00:09:54.732 "flush": true, 00:09:54.732 "reset": true, 00:09:54.732 "nvme_admin": false, 00:09:54.732 "nvme_io": false, 00:09:54.732 "nvme_io_md": false, 00:09:54.732 "write_zeroes": true, 00:09:54.732 "zcopy": true, 00:09:54.732 "get_zone_info": false, 00:09:54.732 "zone_management": false, 00:09:54.732 "zone_append": false, 00:09:54.732 "compare": false, 00:09:54.732 "compare_and_write": false, 00:09:54.732 "abort": true, 00:09:54.732 "seek_hole": false, 00:09:54.732 "seek_data": false, 00:09:54.732 "copy": true, 00:09:54.732 "nvme_iov_md": false 00:09:54.732 }, 00:09:54.732 "memory_domains": [ 00:09:54.732 { 00:09:54.732 "dma_device_id": "system", 00:09:54.732 "dma_device_type": 1 00:09:54.732 }, 00:09:54.732 { 00:09:54.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.732 "dma_device_type": 2 00:09:54.732 } 00:09:54.732 ], 00:09:54.732 "driver_specific": {} 00:09:54.732 } 00:09:54.732 ] 00:09:54.732 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.732 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:54.732 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:54.732 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.732 12:27:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.732 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.732 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.732 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.732 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.732 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.732 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.732 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.732 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.732 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.732 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.732 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.732 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.732 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.732 "name": "Existed_Raid", 00:09:54.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.732 "strip_size_kb": 64, 00:09:54.732 "state": "configuring", 00:09:54.732 "raid_level": "concat", 00:09:54.732 "superblock": false, 00:09:54.732 "num_base_bdevs": 3, 00:09:54.732 "num_base_bdevs_discovered": 2, 00:09:54.732 "num_base_bdevs_operational": 3, 00:09:54.732 "base_bdevs_list": [ 00:09:54.732 { 00:09:54.732 "name": "BaseBdev1", 
00:09:54.732 "uuid": "0c84bee4-f5b0-4cc7-981b-04673a93fe5d", 00:09:54.732 "is_configured": true, 00:09:54.732 "data_offset": 0, 00:09:54.732 "data_size": 65536 00:09:54.732 }, 00:09:54.732 { 00:09:54.732 "name": null, 00:09:54.732 "uuid": "54805eac-aeef-4f1f-b75c-477edd5ef234", 00:09:54.732 "is_configured": false, 00:09:54.732 "data_offset": 0, 00:09:54.732 "data_size": 65536 00:09:54.732 }, 00:09:54.732 { 00:09:54.732 "name": "BaseBdev3", 00:09:54.732 "uuid": "fdc0fc36-7d46-4aef-904e-cfe9a4c4ab46", 00:09:54.732 "is_configured": true, 00:09:54.732 "data_offset": 0, 00:09:54.732 "data_size": 65536 00:09:54.732 } 00:09:54.732 ] 00:09:54.732 }' 00:09:54.732 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.732 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.992 [2024-09-30 12:27:06.878501] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:54.992 
12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.992 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.252 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.252 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.252 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.252 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.252 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.252 "name": "Existed_Raid", 00:09:55.252 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:55.253 "strip_size_kb": 64, 00:09:55.253 "state": "configuring", 00:09:55.253 "raid_level": "concat", 00:09:55.253 "superblock": false, 00:09:55.253 "num_base_bdevs": 3, 00:09:55.253 "num_base_bdevs_discovered": 1, 00:09:55.253 "num_base_bdevs_operational": 3, 00:09:55.253 "base_bdevs_list": [ 00:09:55.253 { 00:09:55.253 "name": "BaseBdev1", 00:09:55.253 "uuid": "0c84bee4-f5b0-4cc7-981b-04673a93fe5d", 00:09:55.253 "is_configured": true, 00:09:55.253 "data_offset": 0, 00:09:55.253 "data_size": 65536 00:09:55.253 }, 00:09:55.253 { 00:09:55.253 "name": null, 00:09:55.253 "uuid": "54805eac-aeef-4f1f-b75c-477edd5ef234", 00:09:55.253 "is_configured": false, 00:09:55.253 "data_offset": 0, 00:09:55.253 "data_size": 65536 00:09:55.253 }, 00:09:55.253 { 00:09:55.253 "name": null, 00:09:55.253 "uuid": "fdc0fc36-7d46-4aef-904e-cfe9a4c4ab46", 00:09:55.253 "is_configured": false, 00:09:55.253 "data_offset": 0, 00:09:55.253 "data_size": 65536 00:09:55.253 } 00:09:55.253 ] 00:09:55.253 }' 00:09:55.253 12:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.253 12:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.512 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:55.512 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.512 12:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.513 [2024-09-30 12:27:07.365634] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.513 12:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.772 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.772 "name": "Existed_Raid", 00:09:55.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.772 "strip_size_kb": 64, 00:09:55.772 "state": "configuring", 00:09:55.772 "raid_level": "concat", 00:09:55.772 "superblock": false, 00:09:55.772 "num_base_bdevs": 3, 00:09:55.772 "num_base_bdevs_discovered": 2, 00:09:55.772 "num_base_bdevs_operational": 3, 00:09:55.772 "base_bdevs_list": [ 00:09:55.772 { 00:09:55.772 "name": "BaseBdev1", 00:09:55.772 "uuid": "0c84bee4-f5b0-4cc7-981b-04673a93fe5d", 00:09:55.772 "is_configured": true, 00:09:55.772 "data_offset": 0, 00:09:55.772 "data_size": 65536 00:09:55.772 }, 00:09:55.772 { 00:09:55.772 "name": null, 00:09:55.772 "uuid": "54805eac-aeef-4f1f-b75c-477edd5ef234", 00:09:55.772 "is_configured": false, 00:09:55.772 "data_offset": 0, 00:09:55.772 "data_size": 65536 00:09:55.772 }, 00:09:55.772 { 00:09:55.772 "name": "BaseBdev3", 00:09:55.772 "uuid": "fdc0fc36-7d46-4aef-904e-cfe9a4c4ab46", 00:09:55.772 "is_configured": true, 00:09:55.772 "data_offset": 0, 00:09:55.772 "data_size": 65536 00:09:55.772 } 00:09:55.772 ] 00:09:55.772 }' 00:09:55.773 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.773 12:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.032 [2024-09-30 12:27:07.816915] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.032 
12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.032 12:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.291 12:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.291 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.291 "name": "Existed_Raid", 00:09:56.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.291 "strip_size_kb": 64, 00:09:56.291 "state": "configuring", 00:09:56.291 "raid_level": "concat", 00:09:56.291 "superblock": false, 00:09:56.291 "num_base_bdevs": 3, 00:09:56.291 "num_base_bdevs_discovered": 1, 00:09:56.292 "num_base_bdevs_operational": 3, 00:09:56.292 "base_bdevs_list": [ 00:09:56.292 { 00:09:56.292 "name": null, 00:09:56.292 "uuid": "0c84bee4-f5b0-4cc7-981b-04673a93fe5d", 00:09:56.292 "is_configured": false, 00:09:56.292 "data_offset": 0, 00:09:56.292 "data_size": 65536 00:09:56.292 }, 00:09:56.292 { 00:09:56.292 "name": null, 00:09:56.292 "uuid": "54805eac-aeef-4f1f-b75c-477edd5ef234", 00:09:56.292 "is_configured": false, 00:09:56.292 "data_offset": 0, 00:09:56.292 "data_size": 65536 00:09:56.292 }, 00:09:56.292 { 00:09:56.292 "name": "BaseBdev3", 00:09:56.292 "uuid": "fdc0fc36-7d46-4aef-904e-cfe9a4c4ab46", 00:09:56.292 "is_configured": true, 00:09:56.292 "data_offset": 0, 00:09:56.292 "data_size": 65536 00:09:56.292 } 00:09:56.292 ] 00:09:56.292 }' 00:09:56.292 12:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.292 12:27:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.551 [2024-09-30 12:27:08.406601] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.551 12:27:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.551 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.811 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.811 "name": "Existed_Raid", 00:09:56.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.811 "strip_size_kb": 64, 00:09:56.811 "state": "configuring", 00:09:56.811 "raid_level": "concat", 00:09:56.811 "superblock": false, 00:09:56.811 "num_base_bdevs": 3, 00:09:56.811 "num_base_bdevs_discovered": 2, 00:09:56.811 "num_base_bdevs_operational": 3, 00:09:56.811 "base_bdevs_list": [ 00:09:56.811 { 00:09:56.811 "name": null, 00:09:56.811 "uuid": "0c84bee4-f5b0-4cc7-981b-04673a93fe5d", 00:09:56.811 "is_configured": false, 00:09:56.811 "data_offset": 0, 00:09:56.811 "data_size": 65536 00:09:56.811 }, 00:09:56.811 { 00:09:56.811 "name": "BaseBdev2", 00:09:56.811 "uuid": "54805eac-aeef-4f1f-b75c-477edd5ef234", 00:09:56.811 "is_configured": true, 00:09:56.811 "data_offset": 
0, 00:09:56.811 "data_size": 65536 00:09:56.811 }, 00:09:56.811 { 00:09:56.811 "name": "BaseBdev3", 00:09:56.811 "uuid": "fdc0fc36-7d46-4aef-904e-cfe9a4c4ab46", 00:09:56.811 "is_configured": true, 00:09:56.811 "data_offset": 0, 00:09:56.811 "data_size": 65536 00:09:56.811 } 00:09:56.811 ] 00:09:56.811 }' 00:09:56.811 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.811 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.070 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.070 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:57.070 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.070 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.070 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.070 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:57.070 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.070 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:57.070 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.070 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.070 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.070 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0c84bee4-f5b0-4cc7-981b-04673a93fe5d 00:09:57.070 12:27:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.070 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.330 [2024-09-30 12:27:08.990277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:57.330 [2024-09-30 12:27:08.990380] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:57.330 [2024-09-30 12:27:08.990411] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:57.330 [2024-09-30 12:27:08.990731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:57.330 [2024-09-30 12:27:08.990954] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:57.330 [2024-09-30 12:27:08.991000] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:57.330 [2024-09-30 12:27:08.991306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.330 NewBaseBdev 00:09:57.330 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.330 12:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:57.330 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:57.330 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:57.330 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:57.330 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:57.330 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:57.330 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:57.330 
12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.330 12:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.330 [ 00:09:57.330 { 00:09:57.330 "name": "NewBaseBdev", 00:09:57.330 "aliases": [ 00:09:57.330 "0c84bee4-f5b0-4cc7-981b-04673a93fe5d" 00:09:57.330 ], 00:09:57.330 "product_name": "Malloc disk", 00:09:57.330 "block_size": 512, 00:09:57.330 "num_blocks": 65536, 00:09:57.330 "uuid": "0c84bee4-f5b0-4cc7-981b-04673a93fe5d", 00:09:57.330 "assigned_rate_limits": { 00:09:57.330 "rw_ios_per_sec": 0, 00:09:57.330 "rw_mbytes_per_sec": 0, 00:09:57.330 "r_mbytes_per_sec": 0, 00:09:57.330 "w_mbytes_per_sec": 0 00:09:57.330 }, 00:09:57.330 "claimed": true, 00:09:57.330 "claim_type": "exclusive_write", 00:09:57.330 "zoned": false, 00:09:57.330 "supported_io_types": { 00:09:57.330 "read": true, 00:09:57.330 "write": true, 00:09:57.330 "unmap": true, 00:09:57.330 "flush": true, 00:09:57.330 "reset": true, 00:09:57.330 "nvme_admin": false, 00:09:57.330 "nvme_io": false, 00:09:57.330 "nvme_io_md": false, 00:09:57.330 "write_zeroes": true, 00:09:57.330 "zcopy": true, 00:09:57.330 "get_zone_info": false, 00:09:57.330 "zone_management": false, 00:09:57.330 "zone_append": false, 00:09:57.330 "compare": false, 00:09:57.330 "compare_and_write": false, 00:09:57.330 "abort": true, 00:09:57.330 "seek_hole": false, 00:09:57.330 "seek_data": false, 00:09:57.330 "copy": true, 00:09:57.330 "nvme_iov_md": false 00:09:57.330 }, 00:09:57.330 
"memory_domains": [ 00:09:57.330 { 00:09:57.330 "dma_device_id": "system", 00:09:57.330 "dma_device_type": 1 00:09:57.330 }, 00:09:57.330 { 00:09:57.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.330 "dma_device_type": 2 00:09:57.330 } 00:09:57.330 ], 00:09:57.330 "driver_specific": {} 00:09:57.330 } 00:09:57.330 ] 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.330 "name": "Existed_Raid", 00:09:57.330 "uuid": "eb0a39a8-baeb-48b5-b98a-4978f6c3e212", 00:09:57.330 "strip_size_kb": 64, 00:09:57.330 "state": "online", 00:09:57.330 "raid_level": "concat", 00:09:57.330 "superblock": false, 00:09:57.330 "num_base_bdevs": 3, 00:09:57.330 "num_base_bdevs_discovered": 3, 00:09:57.330 "num_base_bdevs_operational": 3, 00:09:57.330 "base_bdevs_list": [ 00:09:57.330 { 00:09:57.330 "name": "NewBaseBdev", 00:09:57.330 "uuid": "0c84bee4-f5b0-4cc7-981b-04673a93fe5d", 00:09:57.330 "is_configured": true, 00:09:57.330 "data_offset": 0, 00:09:57.330 "data_size": 65536 00:09:57.330 }, 00:09:57.330 { 00:09:57.330 "name": "BaseBdev2", 00:09:57.330 "uuid": "54805eac-aeef-4f1f-b75c-477edd5ef234", 00:09:57.330 "is_configured": true, 00:09:57.330 "data_offset": 0, 00:09:57.330 "data_size": 65536 00:09:57.330 }, 00:09:57.330 { 00:09:57.330 "name": "BaseBdev3", 00:09:57.330 "uuid": "fdc0fc36-7d46-4aef-904e-cfe9a4c4ab46", 00:09:57.330 "is_configured": true, 00:09:57.330 "data_offset": 0, 00:09:57.330 "data_size": 65536 00:09:57.330 } 00:09:57.330 ] 00:09:57.330 }' 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.330 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.589 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:57.589 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:57.589 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:57.589 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:57.589 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:57.589 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:57.589 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:57.589 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:57.589 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.589 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.589 [2024-09-30 12:27:09.457831] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.589 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.848 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:57.848 "name": "Existed_Raid", 00:09:57.848 "aliases": [ 00:09:57.848 "eb0a39a8-baeb-48b5-b98a-4978f6c3e212" 00:09:57.848 ], 00:09:57.848 "product_name": "Raid Volume", 00:09:57.848 "block_size": 512, 00:09:57.848 "num_blocks": 196608, 00:09:57.848 "uuid": "eb0a39a8-baeb-48b5-b98a-4978f6c3e212", 00:09:57.848 "assigned_rate_limits": { 00:09:57.848 "rw_ios_per_sec": 0, 00:09:57.848 "rw_mbytes_per_sec": 0, 00:09:57.848 "r_mbytes_per_sec": 0, 00:09:57.848 "w_mbytes_per_sec": 0 00:09:57.848 }, 00:09:57.848 "claimed": false, 00:09:57.848 "zoned": false, 00:09:57.848 "supported_io_types": { 00:09:57.848 "read": true, 00:09:57.848 "write": true, 00:09:57.848 "unmap": true, 00:09:57.848 "flush": true, 00:09:57.848 "reset": true, 00:09:57.848 "nvme_admin": false, 00:09:57.848 "nvme_io": false, 00:09:57.848 "nvme_io_md": false, 00:09:57.848 "write_zeroes": true, 
00:09:57.848 "zcopy": false, 00:09:57.848 "get_zone_info": false, 00:09:57.848 "zone_management": false, 00:09:57.848 "zone_append": false, 00:09:57.848 "compare": false, 00:09:57.848 "compare_and_write": false, 00:09:57.848 "abort": false, 00:09:57.848 "seek_hole": false, 00:09:57.848 "seek_data": false, 00:09:57.848 "copy": false, 00:09:57.848 "nvme_iov_md": false 00:09:57.848 }, 00:09:57.848 "memory_domains": [ 00:09:57.848 { 00:09:57.848 "dma_device_id": "system", 00:09:57.848 "dma_device_type": 1 00:09:57.848 }, 00:09:57.848 { 00:09:57.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.848 "dma_device_type": 2 00:09:57.848 }, 00:09:57.848 { 00:09:57.848 "dma_device_id": "system", 00:09:57.848 "dma_device_type": 1 00:09:57.848 }, 00:09:57.848 { 00:09:57.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.848 "dma_device_type": 2 00:09:57.848 }, 00:09:57.848 { 00:09:57.848 "dma_device_id": "system", 00:09:57.848 "dma_device_type": 1 00:09:57.848 }, 00:09:57.848 { 00:09:57.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.848 "dma_device_type": 2 00:09:57.848 } 00:09:57.848 ], 00:09:57.848 "driver_specific": { 00:09:57.848 "raid": { 00:09:57.848 "uuid": "eb0a39a8-baeb-48b5-b98a-4978f6c3e212", 00:09:57.848 "strip_size_kb": 64, 00:09:57.848 "state": "online", 00:09:57.848 "raid_level": "concat", 00:09:57.848 "superblock": false, 00:09:57.848 "num_base_bdevs": 3, 00:09:57.848 "num_base_bdevs_discovered": 3, 00:09:57.848 "num_base_bdevs_operational": 3, 00:09:57.848 "base_bdevs_list": [ 00:09:57.848 { 00:09:57.848 "name": "NewBaseBdev", 00:09:57.848 "uuid": "0c84bee4-f5b0-4cc7-981b-04673a93fe5d", 00:09:57.848 "is_configured": true, 00:09:57.848 "data_offset": 0, 00:09:57.848 "data_size": 65536 00:09:57.848 }, 00:09:57.848 { 00:09:57.848 "name": "BaseBdev2", 00:09:57.848 "uuid": "54805eac-aeef-4f1f-b75c-477edd5ef234", 00:09:57.848 "is_configured": true, 00:09:57.848 "data_offset": 0, 00:09:57.848 "data_size": 65536 00:09:57.848 }, 00:09:57.848 { 
00:09:57.848 "name": "BaseBdev3", 00:09:57.848 "uuid": "fdc0fc36-7d46-4aef-904e-cfe9a4c4ab46", 00:09:57.849 "is_configured": true, 00:09:57.849 "data_offset": 0, 00:09:57.849 "data_size": 65536 00:09:57.849 } 00:09:57.849 ] 00:09:57.849 } 00:09:57.849 } 00:09:57.849 }' 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:57.849 BaseBdev2 00:09:57.849 BaseBdev3' 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:57.849 [2024-09-30 12:27:09.733023] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:57.849 [2024-09-30 12:27:09.733101] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.849 [2024-09-30 12:27:09.733183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.849 [2024-09-30 12:27:09.733257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.849 [2024-09-30 12:27:09.733271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65499 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 65499 ']' 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 65499 00:09:57.849 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:58.108 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:58.108 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65499 00:09:58.108 killing process with pid 65499 00:09:58.108 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:58.108 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:58.108 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65499' 00:09:58.108 12:27:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 65499 00:09:58.108 [2024-09-30 12:27:09.770851] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:58.108 12:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 65499 00:09:58.368 [2024-09-30 12:27:10.063645] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:59.748 00:09:59.748 real 0m10.521s 00:09:59.748 user 0m16.589s 00:09:59.748 sys 0m1.825s 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.748 ************************************ 00:09:59.748 END TEST raid_state_function_test 00:09:59.748 ************************************ 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.748 12:27:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:59.748 12:27:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:59.748 12:27:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.748 12:27:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:59.748 ************************************ 00:09:59.748 START TEST raid_state_function_test_sb 00:09:59.748 ************************************ 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:59.748 Process raid pid: 66120 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66120 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66120' 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66120 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 66120 ']' 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.748 12:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.748 [2024-09-30 12:27:11.456074] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:59.748 [2024-09-30 12:27:11.456300] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.748 [2024-09-30 12:27:11.627699] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.008 [2024-09-30 12:27:11.827493] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.266 [2024-09-30 12:27:12.016795] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.266 [2024-09-30 12:27:12.016922] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.526 [2024-09-30 12:27:12.276973] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:00.526 [2024-09-30 12:27:12.277104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:00.526 [2024-09-30 
12:27:12.277173] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:00.526 [2024-09-30 12:27:12.277213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:00.526 [2024-09-30 12:27:12.277242] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:00.526 [2024-09-30 12:27:12.277306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.526 "name": "Existed_Raid", 00:10:00.526 "uuid": "60d3b2f0-e199-4680-85e3-3396a6119834", 00:10:00.526 "strip_size_kb": 64, 00:10:00.526 "state": "configuring", 00:10:00.526 "raid_level": "concat", 00:10:00.526 "superblock": true, 00:10:00.526 "num_base_bdevs": 3, 00:10:00.526 "num_base_bdevs_discovered": 0, 00:10:00.526 "num_base_bdevs_operational": 3, 00:10:00.526 "base_bdevs_list": [ 00:10:00.526 { 00:10:00.526 "name": "BaseBdev1", 00:10:00.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.526 "is_configured": false, 00:10:00.526 "data_offset": 0, 00:10:00.526 "data_size": 0 00:10:00.526 }, 00:10:00.526 { 00:10:00.526 "name": "BaseBdev2", 00:10:00.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.526 "is_configured": false, 00:10:00.526 "data_offset": 0, 00:10:00.526 "data_size": 0 00:10:00.526 }, 00:10:00.526 { 00:10:00.526 "name": "BaseBdev3", 00:10:00.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.526 "is_configured": false, 00:10:00.526 "data_offset": 0, 00:10:00.526 "data_size": 0 00:10:00.526 } 00:10:00.526 ] 00:10:00.526 }' 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.526 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.095 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:01.095 12:27:12 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.095 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.095 [2024-09-30 12:27:12.740069] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:01.095 [2024-09-30 12:27:12.740162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:01.095 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.095 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:01.095 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.095 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.095 [2024-09-30 12:27:12.748084] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:01.095 [2024-09-30 12:27:12.748173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:01.095 [2024-09-30 12:27:12.748205] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:01.095 [2024-09-30 12:27:12.748234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.096 [2024-09-30 12:27:12.748257] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:01.096 [2024-09-30 12:27:12.748299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:01.096 
12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.096 [2024-09-30 12:27:12.803202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.096 BaseBdev1 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.096 [ 00:10:01.096 { 
00:10:01.096 "name": "BaseBdev1", 00:10:01.096 "aliases": [ 00:10:01.096 "36d240d5-b7b9-4283-80e5-a9bdb8dd88f5" 00:10:01.096 ], 00:10:01.096 "product_name": "Malloc disk", 00:10:01.096 "block_size": 512, 00:10:01.096 "num_blocks": 65536, 00:10:01.096 "uuid": "36d240d5-b7b9-4283-80e5-a9bdb8dd88f5", 00:10:01.096 "assigned_rate_limits": { 00:10:01.096 "rw_ios_per_sec": 0, 00:10:01.096 "rw_mbytes_per_sec": 0, 00:10:01.096 "r_mbytes_per_sec": 0, 00:10:01.096 "w_mbytes_per_sec": 0 00:10:01.096 }, 00:10:01.096 "claimed": true, 00:10:01.096 "claim_type": "exclusive_write", 00:10:01.096 "zoned": false, 00:10:01.096 "supported_io_types": { 00:10:01.096 "read": true, 00:10:01.096 "write": true, 00:10:01.096 "unmap": true, 00:10:01.096 "flush": true, 00:10:01.096 "reset": true, 00:10:01.096 "nvme_admin": false, 00:10:01.096 "nvme_io": false, 00:10:01.096 "nvme_io_md": false, 00:10:01.096 "write_zeroes": true, 00:10:01.096 "zcopy": true, 00:10:01.096 "get_zone_info": false, 00:10:01.096 "zone_management": false, 00:10:01.096 "zone_append": false, 00:10:01.096 "compare": false, 00:10:01.096 "compare_and_write": false, 00:10:01.096 "abort": true, 00:10:01.096 "seek_hole": false, 00:10:01.096 "seek_data": false, 00:10:01.096 "copy": true, 00:10:01.096 "nvme_iov_md": false 00:10:01.096 }, 00:10:01.096 "memory_domains": [ 00:10:01.096 { 00:10:01.096 "dma_device_id": "system", 00:10:01.096 "dma_device_type": 1 00:10:01.096 }, 00:10:01.096 { 00:10:01.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.096 "dma_device_type": 2 00:10:01.096 } 00:10:01.096 ], 00:10:01.096 "driver_specific": {} 00:10:01.096 } 00:10:01.096 ] 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.096 "name": "Existed_Raid", 00:10:01.096 "uuid": "1e567919-8197-48bc-be10-50b6a207bd3d", 00:10:01.096 "strip_size_kb": 64, 00:10:01.096 "state": "configuring", 00:10:01.096 "raid_level": "concat", 00:10:01.096 "superblock": true, 00:10:01.096 
"num_base_bdevs": 3, 00:10:01.096 "num_base_bdevs_discovered": 1, 00:10:01.096 "num_base_bdevs_operational": 3, 00:10:01.096 "base_bdevs_list": [ 00:10:01.096 { 00:10:01.096 "name": "BaseBdev1", 00:10:01.096 "uuid": "36d240d5-b7b9-4283-80e5-a9bdb8dd88f5", 00:10:01.096 "is_configured": true, 00:10:01.096 "data_offset": 2048, 00:10:01.096 "data_size": 63488 00:10:01.096 }, 00:10:01.096 { 00:10:01.096 "name": "BaseBdev2", 00:10:01.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.096 "is_configured": false, 00:10:01.096 "data_offset": 0, 00:10:01.096 "data_size": 0 00:10:01.096 }, 00:10:01.096 { 00:10:01.096 "name": "BaseBdev3", 00:10:01.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.096 "is_configured": false, 00:10:01.096 "data_offset": 0, 00:10:01.096 "data_size": 0 00:10:01.096 } 00:10:01.096 ] 00:10:01.096 }' 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.096 12:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.356 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:01.356 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.356 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.356 [2024-09-30 12:27:13.226508] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:01.356 [2024-09-30 12:27:13.226612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:01.356 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.356 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:01.356 
12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.356 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.356 [2024-09-30 12:27:13.234543] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.357 [2024-09-30 12:27:13.236435] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:01.357 [2024-09-30 12:27:13.236555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.357 [2024-09-30 12:27:13.236607] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:01.357 [2024-09-30 12:27:13.236636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:01.357 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.357 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:01.357 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:01.357 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:01.357 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.357 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.357 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.357 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.357 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.357 12:27:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.357 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.357 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.357 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.357 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.357 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.357 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.357 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.619 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.619 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.619 "name": "Existed_Raid", 00:10:01.619 "uuid": "279782a1-d948-4121-837a-13845e28c55a", 00:10:01.619 "strip_size_kb": 64, 00:10:01.619 "state": "configuring", 00:10:01.619 "raid_level": "concat", 00:10:01.619 "superblock": true, 00:10:01.619 "num_base_bdevs": 3, 00:10:01.619 "num_base_bdevs_discovered": 1, 00:10:01.619 "num_base_bdevs_operational": 3, 00:10:01.619 "base_bdevs_list": [ 00:10:01.619 { 00:10:01.619 "name": "BaseBdev1", 00:10:01.619 "uuid": "36d240d5-b7b9-4283-80e5-a9bdb8dd88f5", 00:10:01.619 "is_configured": true, 00:10:01.619 "data_offset": 2048, 00:10:01.619 "data_size": 63488 00:10:01.619 }, 00:10:01.619 { 00:10:01.619 "name": "BaseBdev2", 00:10:01.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.619 "is_configured": false, 00:10:01.619 "data_offset": 0, 00:10:01.619 "data_size": 0 00:10:01.619 }, 00:10:01.619 { 00:10:01.619 "name": "BaseBdev3", 00:10:01.619 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:01.619 "is_configured": false, 00:10:01.619 "data_offset": 0, 00:10:01.619 "data_size": 0 00:10:01.619 } 00:10:01.619 ] 00:10:01.619 }' 00:10:01.619 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.619 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.958 [2024-09-30 12:27:13.708135] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:01.958 BaseBdev2 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.958 [ 00:10:01.958 { 00:10:01.958 "name": "BaseBdev2", 00:10:01.958 "aliases": [ 00:10:01.958 "bab1614e-0e60-4b63-8264-850c20f7b628" 00:10:01.958 ], 00:10:01.958 "product_name": "Malloc disk", 00:10:01.958 "block_size": 512, 00:10:01.958 "num_blocks": 65536, 00:10:01.958 "uuid": "bab1614e-0e60-4b63-8264-850c20f7b628", 00:10:01.958 "assigned_rate_limits": { 00:10:01.958 "rw_ios_per_sec": 0, 00:10:01.958 "rw_mbytes_per_sec": 0, 00:10:01.958 "r_mbytes_per_sec": 0, 00:10:01.958 "w_mbytes_per_sec": 0 00:10:01.958 }, 00:10:01.958 "claimed": true, 00:10:01.958 "claim_type": "exclusive_write", 00:10:01.958 "zoned": false, 00:10:01.958 "supported_io_types": { 00:10:01.958 "read": true, 00:10:01.958 "write": true, 00:10:01.958 "unmap": true, 00:10:01.958 "flush": true, 00:10:01.958 "reset": true, 00:10:01.958 "nvme_admin": false, 00:10:01.958 "nvme_io": false, 00:10:01.958 "nvme_io_md": false, 00:10:01.958 "write_zeroes": true, 00:10:01.958 "zcopy": true, 00:10:01.958 "get_zone_info": false, 00:10:01.958 "zone_management": false, 00:10:01.958 "zone_append": false, 00:10:01.958 "compare": false, 00:10:01.958 "compare_and_write": false, 00:10:01.958 "abort": true, 00:10:01.958 "seek_hole": false, 00:10:01.958 "seek_data": false, 00:10:01.958 "copy": true, 00:10:01.958 "nvme_iov_md": false 00:10:01.958 }, 00:10:01.958 "memory_domains": [ 00:10:01.958 { 00:10:01.958 "dma_device_id": "system", 00:10:01.958 "dma_device_type": 1 00:10:01.958 }, 00:10:01.958 { 00:10:01.958 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.958 "dma_device_type": 2 00:10:01.958 } 00:10:01.958 ], 00:10:01.958 "driver_specific": {} 00:10:01.958 } 00:10:01.958 ] 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:01.958 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:01.959 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:01.959 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.959 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.959 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.959 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.959 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.959 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.959 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.959 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.959 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.959 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.959 12:27:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.959 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.959 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.959 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.959 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.959 "name": "Existed_Raid", 00:10:01.959 "uuid": "279782a1-d948-4121-837a-13845e28c55a", 00:10:01.959 "strip_size_kb": 64, 00:10:01.959 "state": "configuring", 00:10:01.959 "raid_level": "concat", 00:10:01.959 "superblock": true, 00:10:01.959 "num_base_bdevs": 3, 00:10:01.959 "num_base_bdevs_discovered": 2, 00:10:01.959 "num_base_bdevs_operational": 3, 00:10:01.959 "base_bdevs_list": [ 00:10:01.959 { 00:10:01.959 "name": "BaseBdev1", 00:10:01.959 "uuid": "36d240d5-b7b9-4283-80e5-a9bdb8dd88f5", 00:10:01.959 "is_configured": true, 00:10:01.959 "data_offset": 2048, 00:10:01.959 "data_size": 63488 00:10:01.959 }, 00:10:01.959 { 00:10:01.959 "name": "BaseBdev2", 00:10:01.959 "uuid": "bab1614e-0e60-4b63-8264-850c20f7b628", 00:10:01.959 "is_configured": true, 00:10:01.959 "data_offset": 2048, 00:10:01.959 "data_size": 63488 00:10:01.959 }, 00:10:01.959 { 00:10:01.959 "name": "BaseBdev3", 00:10:01.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.959 "is_configured": false, 00:10:01.959 "data_offset": 0, 00:10:01.959 "data_size": 0 00:10:01.959 } 00:10:01.959 ] 00:10:01.959 }' 00:10:01.959 12:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.959 12:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:02.528 12:27:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.528 [2024-09-30 12:27:14.209946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.528 [2024-09-30 12:27:14.210228] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:02.528 [2024-09-30 12:27:14.210254] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:02.528 [2024-09-30 12:27:14.210500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:02.528 [2024-09-30 12:27:14.210653] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:02.528 BaseBdev3 00:10:02.528 [2024-09-30 12:27:14.210665] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:02.528 [2024-09-30 12:27:14.210818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.528 [ 00:10:02.528 { 00:10:02.528 "name": "BaseBdev3", 00:10:02.528 "aliases": [ 00:10:02.528 "b1354ec3-12cd-4be4-b620-36e0266163c0" 00:10:02.528 ], 00:10:02.528 "product_name": "Malloc disk", 00:10:02.528 "block_size": 512, 00:10:02.528 "num_blocks": 65536, 00:10:02.528 "uuid": "b1354ec3-12cd-4be4-b620-36e0266163c0", 00:10:02.528 "assigned_rate_limits": { 00:10:02.528 "rw_ios_per_sec": 0, 00:10:02.528 "rw_mbytes_per_sec": 0, 00:10:02.528 "r_mbytes_per_sec": 0, 00:10:02.528 "w_mbytes_per_sec": 0 00:10:02.528 }, 00:10:02.528 "claimed": true, 00:10:02.528 "claim_type": "exclusive_write", 00:10:02.528 "zoned": false, 00:10:02.528 "supported_io_types": { 00:10:02.528 "read": true, 00:10:02.528 "write": true, 00:10:02.528 "unmap": true, 00:10:02.528 "flush": true, 00:10:02.528 "reset": true, 00:10:02.528 "nvme_admin": false, 00:10:02.528 "nvme_io": false, 00:10:02.528 "nvme_io_md": false, 00:10:02.528 "write_zeroes": true, 00:10:02.528 "zcopy": true, 00:10:02.528 "get_zone_info": false, 00:10:02.528 "zone_management": false, 00:10:02.528 "zone_append": false, 00:10:02.528 "compare": false, 00:10:02.528 "compare_and_write": false, 00:10:02.528 "abort": true, 00:10:02.528 "seek_hole": false, 00:10:02.528 "seek_data": false, 
00:10:02.528 "copy": true, 00:10:02.528 "nvme_iov_md": false 00:10:02.528 }, 00:10:02.528 "memory_domains": [ 00:10:02.528 { 00:10:02.528 "dma_device_id": "system", 00:10:02.528 "dma_device_type": 1 00:10:02.528 }, 00:10:02.528 { 00:10:02.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.528 "dma_device_type": 2 00:10:02.528 } 00:10:02.528 ], 00:10:02.528 "driver_specific": {} 00:10:02.528 } 00:10:02.528 ] 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.528 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.529 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.529 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.529 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.529 12:27:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.529 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.529 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.529 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.529 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.529 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.529 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.529 "name": "Existed_Raid", 00:10:02.529 "uuid": "279782a1-d948-4121-837a-13845e28c55a", 00:10:02.529 "strip_size_kb": 64, 00:10:02.529 "state": "online", 00:10:02.529 "raid_level": "concat", 00:10:02.529 "superblock": true, 00:10:02.529 "num_base_bdevs": 3, 00:10:02.529 "num_base_bdevs_discovered": 3, 00:10:02.529 "num_base_bdevs_operational": 3, 00:10:02.529 "base_bdevs_list": [ 00:10:02.529 { 00:10:02.529 "name": "BaseBdev1", 00:10:02.529 "uuid": "36d240d5-b7b9-4283-80e5-a9bdb8dd88f5", 00:10:02.529 "is_configured": true, 00:10:02.529 "data_offset": 2048, 00:10:02.529 "data_size": 63488 00:10:02.529 }, 00:10:02.529 { 00:10:02.529 "name": "BaseBdev2", 00:10:02.529 "uuid": "bab1614e-0e60-4b63-8264-850c20f7b628", 00:10:02.529 "is_configured": true, 00:10:02.529 "data_offset": 2048, 00:10:02.529 "data_size": 63488 00:10:02.529 }, 00:10:02.529 { 00:10:02.529 "name": "BaseBdev3", 00:10:02.529 "uuid": "b1354ec3-12cd-4be4-b620-36e0266163c0", 00:10:02.529 "is_configured": true, 00:10:02.529 "data_offset": 2048, 00:10:02.529 "data_size": 63488 00:10:02.529 } 00:10:02.529 ] 00:10:02.529 }' 00:10:02.529 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.529 12:27:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.097 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:03.097 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:03.097 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:03.097 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:03.097 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:03.097 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:03.097 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:03.097 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:03.097 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.098 [2024-09-30 12:27:14.705472] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:03.098 "name": "Existed_Raid", 00:10:03.098 "aliases": [ 00:10:03.098 "279782a1-d948-4121-837a-13845e28c55a" 00:10:03.098 ], 00:10:03.098 "product_name": "Raid Volume", 00:10:03.098 "block_size": 512, 00:10:03.098 "num_blocks": 190464, 00:10:03.098 "uuid": "279782a1-d948-4121-837a-13845e28c55a", 00:10:03.098 "assigned_rate_limits": { 00:10:03.098 "rw_ios_per_sec": 0, 00:10:03.098 "rw_mbytes_per_sec": 0, 00:10:03.098 
"r_mbytes_per_sec": 0, 00:10:03.098 "w_mbytes_per_sec": 0 00:10:03.098 }, 00:10:03.098 "claimed": false, 00:10:03.098 "zoned": false, 00:10:03.098 "supported_io_types": { 00:10:03.098 "read": true, 00:10:03.098 "write": true, 00:10:03.098 "unmap": true, 00:10:03.098 "flush": true, 00:10:03.098 "reset": true, 00:10:03.098 "nvme_admin": false, 00:10:03.098 "nvme_io": false, 00:10:03.098 "nvme_io_md": false, 00:10:03.098 "write_zeroes": true, 00:10:03.098 "zcopy": false, 00:10:03.098 "get_zone_info": false, 00:10:03.098 "zone_management": false, 00:10:03.098 "zone_append": false, 00:10:03.098 "compare": false, 00:10:03.098 "compare_and_write": false, 00:10:03.098 "abort": false, 00:10:03.098 "seek_hole": false, 00:10:03.098 "seek_data": false, 00:10:03.098 "copy": false, 00:10:03.098 "nvme_iov_md": false 00:10:03.098 }, 00:10:03.098 "memory_domains": [ 00:10:03.098 { 00:10:03.098 "dma_device_id": "system", 00:10:03.098 "dma_device_type": 1 00:10:03.098 }, 00:10:03.098 { 00:10:03.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.098 "dma_device_type": 2 00:10:03.098 }, 00:10:03.098 { 00:10:03.098 "dma_device_id": "system", 00:10:03.098 "dma_device_type": 1 00:10:03.098 }, 00:10:03.098 { 00:10:03.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.098 "dma_device_type": 2 00:10:03.098 }, 00:10:03.098 { 00:10:03.098 "dma_device_id": "system", 00:10:03.098 "dma_device_type": 1 00:10:03.098 }, 00:10:03.098 { 00:10:03.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.098 "dma_device_type": 2 00:10:03.098 } 00:10:03.098 ], 00:10:03.098 "driver_specific": { 00:10:03.098 "raid": { 00:10:03.098 "uuid": "279782a1-d948-4121-837a-13845e28c55a", 00:10:03.098 "strip_size_kb": 64, 00:10:03.098 "state": "online", 00:10:03.098 "raid_level": "concat", 00:10:03.098 "superblock": true, 00:10:03.098 "num_base_bdevs": 3, 00:10:03.098 "num_base_bdevs_discovered": 3, 00:10:03.098 "num_base_bdevs_operational": 3, 00:10:03.098 "base_bdevs_list": [ 00:10:03.098 { 00:10:03.098 
"name": "BaseBdev1", 00:10:03.098 "uuid": "36d240d5-b7b9-4283-80e5-a9bdb8dd88f5", 00:10:03.098 "is_configured": true, 00:10:03.098 "data_offset": 2048, 00:10:03.098 "data_size": 63488 00:10:03.098 }, 00:10:03.098 { 00:10:03.098 "name": "BaseBdev2", 00:10:03.098 "uuid": "bab1614e-0e60-4b63-8264-850c20f7b628", 00:10:03.098 "is_configured": true, 00:10:03.098 "data_offset": 2048, 00:10:03.098 "data_size": 63488 00:10:03.098 }, 00:10:03.098 { 00:10:03.098 "name": "BaseBdev3", 00:10:03.098 "uuid": "b1354ec3-12cd-4be4-b620-36e0266163c0", 00:10:03.098 "is_configured": true, 00:10:03.098 "data_offset": 2048, 00:10:03.098 "data_size": 63488 00:10:03.098 } 00:10:03.098 ] 00:10:03.098 } 00:10:03.098 } 00:10:03.098 }' 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:03.098 BaseBdev2 00:10:03.098 BaseBdev3' 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.098 12:27:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.098 12:27:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.357 [2024-09-30 12:27:15.012623] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:03.357 [2024-09-30 12:27:15.012657] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:03.357 [2024-09-30 12:27:15.012709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.357 "name": "Existed_Raid", 00:10:03.357 "uuid": "279782a1-d948-4121-837a-13845e28c55a", 00:10:03.357 "strip_size_kb": 64, 00:10:03.357 "state": "offline", 00:10:03.357 "raid_level": "concat", 00:10:03.357 "superblock": true, 00:10:03.357 "num_base_bdevs": 3, 00:10:03.357 "num_base_bdevs_discovered": 2, 00:10:03.357 "num_base_bdevs_operational": 2, 00:10:03.357 "base_bdevs_list": [ 00:10:03.357 { 00:10:03.357 "name": null, 00:10:03.357 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:03.357 "is_configured": false, 00:10:03.357 "data_offset": 0, 00:10:03.357 "data_size": 63488 00:10:03.357 }, 00:10:03.357 { 00:10:03.357 "name": "BaseBdev2", 00:10:03.357 "uuid": "bab1614e-0e60-4b63-8264-850c20f7b628", 00:10:03.357 "is_configured": true, 00:10:03.357 "data_offset": 2048, 00:10:03.357 "data_size": 63488 00:10:03.357 }, 00:10:03.357 { 00:10:03.357 "name": "BaseBdev3", 00:10:03.357 "uuid": "b1354ec3-12cd-4be4-b620-36e0266163c0", 00:10:03.357 "is_configured": true, 00:10:03.357 "data_offset": 2048, 00:10:03.357 "data_size": 63488 00:10:03.357 } 00:10:03.357 ] 00:10:03.357 }' 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.357 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.926 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:03.926 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:03.926 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:03.926 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.927 [2024-09-30 12:27:15.608913] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.927 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.927 [2024-09-30 12:27:15.761816] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:03.927 [2024-09-30 12:27:15.761931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.187 BaseBdev2 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.187 
12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.187 [ 00:10:04.187 { 00:10:04.187 "name": "BaseBdev2", 00:10:04.187 "aliases": [ 00:10:04.187 "3341c3a4-97b3-4bf5-a6bc-45cd1d1fd726" 00:10:04.187 ], 00:10:04.187 "product_name": "Malloc disk", 00:10:04.187 "block_size": 512, 00:10:04.187 "num_blocks": 65536, 00:10:04.187 "uuid": "3341c3a4-97b3-4bf5-a6bc-45cd1d1fd726", 00:10:04.187 "assigned_rate_limits": { 00:10:04.187 "rw_ios_per_sec": 0, 00:10:04.187 "rw_mbytes_per_sec": 0, 00:10:04.187 "r_mbytes_per_sec": 0, 00:10:04.187 "w_mbytes_per_sec": 0 
00:10:04.187 }, 00:10:04.187 "claimed": false, 00:10:04.187 "zoned": false, 00:10:04.187 "supported_io_types": { 00:10:04.187 "read": true, 00:10:04.187 "write": true, 00:10:04.187 "unmap": true, 00:10:04.187 "flush": true, 00:10:04.187 "reset": true, 00:10:04.187 "nvme_admin": false, 00:10:04.187 "nvme_io": false, 00:10:04.187 "nvme_io_md": false, 00:10:04.187 "write_zeroes": true, 00:10:04.187 "zcopy": true, 00:10:04.187 "get_zone_info": false, 00:10:04.187 "zone_management": false, 00:10:04.187 "zone_append": false, 00:10:04.187 "compare": false, 00:10:04.187 "compare_and_write": false, 00:10:04.187 "abort": true, 00:10:04.187 "seek_hole": false, 00:10:04.187 "seek_data": false, 00:10:04.187 "copy": true, 00:10:04.187 "nvme_iov_md": false 00:10:04.187 }, 00:10:04.187 "memory_domains": [ 00:10:04.187 { 00:10:04.187 "dma_device_id": "system", 00:10:04.187 "dma_device_type": 1 00:10:04.187 }, 00:10:04.187 { 00:10:04.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.187 "dma_device_type": 2 00:10:04.187 } 00:10:04.187 ], 00:10:04.187 "driver_specific": {} 00:10:04.187 } 00:10:04.187 ] 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.187 12:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.187 BaseBdev3 00:10:04.187 12:27:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.187 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:04.187 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:04.187 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.187 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:04.187 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.187 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.187 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.187 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.187 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.187 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.187 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:04.187 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.187 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.187 [ 00:10:04.187 { 00:10:04.187 "name": "BaseBdev3", 00:10:04.187 "aliases": [ 00:10:04.187 "252bc578-330d-4f03-9a75-18f7080a42ee" 00:10:04.187 ], 00:10:04.187 "product_name": "Malloc disk", 00:10:04.187 "block_size": 512, 00:10:04.187 "num_blocks": 65536, 00:10:04.187 "uuid": "252bc578-330d-4f03-9a75-18f7080a42ee", 00:10:04.187 "assigned_rate_limits": { 00:10:04.187 "rw_ios_per_sec": 0, 00:10:04.187 "rw_mbytes_per_sec": 0, 
00:10:04.187 "r_mbytes_per_sec": 0, 00:10:04.187 "w_mbytes_per_sec": 0 00:10:04.187 }, 00:10:04.187 "claimed": false, 00:10:04.187 "zoned": false, 00:10:04.187 "supported_io_types": { 00:10:04.187 "read": true, 00:10:04.187 "write": true, 00:10:04.187 "unmap": true, 00:10:04.187 "flush": true, 00:10:04.187 "reset": true, 00:10:04.187 "nvme_admin": false, 00:10:04.187 "nvme_io": false, 00:10:04.187 "nvme_io_md": false, 00:10:04.187 "write_zeroes": true, 00:10:04.187 "zcopy": true, 00:10:04.187 "get_zone_info": false, 00:10:04.187 "zone_management": false, 00:10:04.187 "zone_append": false, 00:10:04.187 "compare": false, 00:10:04.187 "compare_and_write": false, 00:10:04.187 "abort": true, 00:10:04.187 "seek_hole": false, 00:10:04.187 "seek_data": false, 00:10:04.187 "copy": true, 00:10:04.187 "nvme_iov_md": false 00:10:04.187 }, 00:10:04.187 "memory_domains": [ 00:10:04.187 { 00:10:04.187 "dma_device_id": "system", 00:10:04.187 "dma_device_type": 1 00:10:04.187 }, 00:10:04.187 { 00:10:04.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.187 "dma_device_type": 2 00:10:04.187 } 00:10:04.187 ], 00:10:04.187 "driver_specific": {} 00:10:04.187 } 00:10:04.187 ] 00:10:04.187 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.187 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:04.188 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:04.188 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:04.188 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:04.188 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.188 12:27:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:04.188 [2024-09-30 12:27:16.074690] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.188 [2024-09-30 12:27:16.074751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.188 [2024-09-30 12:27:16.074791] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.188 [2024-09-30 12:27:16.076531] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.447 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.447 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:04.447 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.447 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.447 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.447 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.447 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.447 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.447 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.447 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.447 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.447 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.447 12:27:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.447 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.447 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.447 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.447 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.447 "name": "Existed_Raid", 00:10:04.447 "uuid": "d70f58da-e6ea-49ab-a356-ad1c5213247e", 00:10:04.447 "strip_size_kb": 64, 00:10:04.447 "state": "configuring", 00:10:04.447 "raid_level": "concat", 00:10:04.447 "superblock": true, 00:10:04.447 "num_base_bdevs": 3, 00:10:04.447 "num_base_bdevs_discovered": 2, 00:10:04.447 "num_base_bdevs_operational": 3, 00:10:04.447 "base_bdevs_list": [ 00:10:04.447 { 00:10:04.447 "name": "BaseBdev1", 00:10:04.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.447 "is_configured": false, 00:10:04.447 "data_offset": 0, 00:10:04.447 "data_size": 0 00:10:04.447 }, 00:10:04.447 { 00:10:04.447 "name": "BaseBdev2", 00:10:04.447 "uuid": "3341c3a4-97b3-4bf5-a6bc-45cd1d1fd726", 00:10:04.447 "is_configured": true, 00:10:04.447 "data_offset": 2048, 00:10:04.447 "data_size": 63488 00:10:04.447 }, 00:10:04.447 { 00:10:04.447 "name": "BaseBdev3", 00:10:04.447 "uuid": "252bc578-330d-4f03-9a75-18f7080a42ee", 00:10:04.447 "is_configured": true, 00:10:04.447 "data_offset": 2048, 00:10:04.447 "data_size": 63488 00:10:04.447 } 00:10:04.447 ] 00:10:04.447 }' 00:10:04.447 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.447 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.707 [2024-09-30 12:27:16.509906] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.707 "name": "Existed_Raid", 00:10:04.707 "uuid": "d70f58da-e6ea-49ab-a356-ad1c5213247e", 00:10:04.707 "strip_size_kb": 64, 00:10:04.707 "state": "configuring", 00:10:04.707 "raid_level": "concat", 00:10:04.707 "superblock": true, 00:10:04.707 "num_base_bdevs": 3, 00:10:04.707 "num_base_bdevs_discovered": 1, 00:10:04.707 "num_base_bdevs_operational": 3, 00:10:04.707 "base_bdevs_list": [ 00:10:04.707 { 00:10:04.707 "name": "BaseBdev1", 00:10:04.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.707 "is_configured": false, 00:10:04.707 "data_offset": 0, 00:10:04.707 "data_size": 0 00:10:04.707 }, 00:10:04.707 { 00:10:04.707 "name": null, 00:10:04.707 "uuid": "3341c3a4-97b3-4bf5-a6bc-45cd1d1fd726", 00:10:04.707 "is_configured": false, 00:10:04.707 "data_offset": 0, 00:10:04.707 "data_size": 63488 00:10:04.707 }, 00:10:04.707 { 00:10:04.707 "name": "BaseBdev3", 00:10:04.707 "uuid": "252bc578-330d-4f03-9a75-18f7080a42ee", 00:10:04.707 "is_configured": true, 00:10:04.707 "data_offset": 2048, 00:10:04.707 "data_size": 63488 00:10:04.707 } 00:10:04.707 ] 00:10:04.707 }' 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.707 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.277 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.277 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.277 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.277 12:27:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:05.277 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.277 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:05.277 12:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:05.277 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.277 12:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.277 [2024-09-30 12:27:17.032787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.277 BaseBdev1 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.277 
12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.277 [ 00:10:05.277 { 00:10:05.277 "name": "BaseBdev1", 00:10:05.277 "aliases": [ 00:10:05.277 "7a3ec1b7-e4d1-4686-8b5e-a82bfed846d3" 00:10:05.277 ], 00:10:05.277 "product_name": "Malloc disk", 00:10:05.277 "block_size": 512, 00:10:05.277 "num_blocks": 65536, 00:10:05.277 "uuid": "7a3ec1b7-e4d1-4686-8b5e-a82bfed846d3", 00:10:05.277 "assigned_rate_limits": { 00:10:05.277 "rw_ios_per_sec": 0, 00:10:05.277 "rw_mbytes_per_sec": 0, 00:10:05.277 "r_mbytes_per_sec": 0, 00:10:05.277 "w_mbytes_per_sec": 0 00:10:05.277 }, 00:10:05.277 "claimed": true, 00:10:05.277 "claim_type": "exclusive_write", 00:10:05.277 "zoned": false, 00:10:05.277 "supported_io_types": { 00:10:05.277 "read": true, 00:10:05.277 "write": true, 00:10:05.277 "unmap": true, 00:10:05.277 "flush": true, 00:10:05.277 "reset": true, 00:10:05.277 "nvme_admin": false, 00:10:05.277 "nvme_io": false, 00:10:05.277 "nvme_io_md": false, 00:10:05.277 "write_zeroes": true, 00:10:05.277 "zcopy": true, 00:10:05.277 "get_zone_info": false, 00:10:05.277 "zone_management": false, 00:10:05.277 "zone_append": false, 00:10:05.277 "compare": false, 00:10:05.277 "compare_and_write": false, 00:10:05.277 "abort": true, 00:10:05.277 "seek_hole": false, 00:10:05.277 "seek_data": false, 00:10:05.277 "copy": true, 00:10:05.277 "nvme_iov_md": false 00:10:05.277 }, 00:10:05.277 "memory_domains": [ 00:10:05.277 { 00:10:05.277 "dma_device_id": "system", 00:10:05.277 "dma_device_type": 1 00:10:05.277 }, 00:10:05.277 { 00:10:05.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:05.277 "dma_device_type": 2 00:10:05.277 } 00:10:05.277 ], 00:10:05.277 "driver_specific": {} 00:10:05.277 } 00:10:05.277 ] 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.277 "name": "Existed_Raid", 00:10:05.277 "uuid": "d70f58da-e6ea-49ab-a356-ad1c5213247e", 00:10:05.277 "strip_size_kb": 64, 00:10:05.277 "state": "configuring", 00:10:05.277 "raid_level": "concat", 00:10:05.277 "superblock": true, 00:10:05.277 "num_base_bdevs": 3, 00:10:05.277 "num_base_bdevs_discovered": 2, 00:10:05.277 "num_base_bdevs_operational": 3, 00:10:05.277 "base_bdevs_list": [ 00:10:05.277 { 00:10:05.277 "name": "BaseBdev1", 00:10:05.277 "uuid": "7a3ec1b7-e4d1-4686-8b5e-a82bfed846d3", 00:10:05.277 "is_configured": true, 00:10:05.277 "data_offset": 2048, 00:10:05.277 "data_size": 63488 00:10:05.277 }, 00:10:05.277 { 00:10:05.277 "name": null, 00:10:05.277 "uuid": "3341c3a4-97b3-4bf5-a6bc-45cd1d1fd726", 00:10:05.277 "is_configured": false, 00:10:05.277 "data_offset": 0, 00:10:05.277 "data_size": 63488 00:10:05.277 }, 00:10:05.277 { 00:10:05.277 "name": "BaseBdev3", 00:10:05.277 "uuid": "252bc578-330d-4f03-9a75-18f7080a42ee", 00:10:05.277 "is_configured": true, 00:10:05.277 "data_offset": 2048, 00:10:05.277 "data_size": 63488 00:10:05.277 } 00:10:05.277 ] 00:10:05.277 }' 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.277 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.846 [2024-09-30 12:27:17.551917] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.846 "name": "Existed_Raid", 00:10:05.846 "uuid": "d70f58da-e6ea-49ab-a356-ad1c5213247e", 00:10:05.846 "strip_size_kb": 64, 00:10:05.846 "state": "configuring", 00:10:05.846 "raid_level": "concat", 00:10:05.846 "superblock": true, 00:10:05.846 "num_base_bdevs": 3, 00:10:05.846 "num_base_bdevs_discovered": 1, 00:10:05.846 "num_base_bdevs_operational": 3, 00:10:05.846 "base_bdevs_list": [ 00:10:05.846 { 00:10:05.846 "name": "BaseBdev1", 00:10:05.846 "uuid": "7a3ec1b7-e4d1-4686-8b5e-a82bfed846d3", 00:10:05.846 "is_configured": true, 00:10:05.846 "data_offset": 2048, 00:10:05.846 "data_size": 63488 00:10:05.846 }, 00:10:05.846 { 00:10:05.846 "name": null, 00:10:05.846 "uuid": "3341c3a4-97b3-4bf5-a6bc-45cd1d1fd726", 00:10:05.846 "is_configured": false, 00:10:05.846 "data_offset": 0, 00:10:05.846 "data_size": 63488 00:10:05.846 }, 00:10:05.846 { 00:10:05.846 "name": null, 00:10:05.846 "uuid": "252bc578-330d-4f03-9a75-18f7080a42ee", 00:10:05.846 "is_configured": false, 00:10:05.846 "data_offset": 0, 00:10:05.846 "data_size": 63488 00:10:05.846 } 00:10:05.846 ] 00:10:05.846 }' 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.846 12:27:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:06.105 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.105 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.105 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.105 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:06.106 12:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.106 12:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:06.106 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.365 [2024-09-30 12:27:18.007149] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.365 12:27:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.365 "name": "Existed_Raid", 00:10:06.365 "uuid": "d70f58da-e6ea-49ab-a356-ad1c5213247e", 00:10:06.365 "strip_size_kb": 64, 00:10:06.365 "state": "configuring", 00:10:06.365 "raid_level": "concat", 00:10:06.365 "superblock": true, 00:10:06.365 "num_base_bdevs": 3, 00:10:06.365 "num_base_bdevs_discovered": 2, 00:10:06.365 "num_base_bdevs_operational": 3, 00:10:06.365 "base_bdevs_list": [ 00:10:06.365 { 00:10:06.365 "name": "BaseBdev1", 00:10:06.365 "uuid": "7a3ec1b7-e4d1-4686-8b5e-a82bfed846d3", 00:10:06.365 "is_configured": true, 00:10:06.365 "data_offset": 2048, 00:10:06.365 "data_size": 63488 00:10:06.365 }, 00:10:06.365 { 00:10:06.365 "name": null, 00:10:06.365 "uuid": "3341c3a4-97b3-4bf5-a6bc-45cd1d1fd726", 00:10:06.365 "is_configured": 
false, 00:10:06.365 "data_offset": 0, 00:10:06.365 "data_size": 63488 00:10:06.365 }, 00:10:06.365 { 00:10:06.365 "name": "BaseBdev3", 00:10:06.365 "uuid": "252bc578-330d-4f03-9a75-18f7080a42ee", 00:10:06.365 "is_configured": true, 00:10:06.365 "data_offset": 2048, 00:10:06.365 "data_size": 63488 00:10:06.365 } 00:10:06.365 ] 00:10:06.365 }' 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.365 12:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.625 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.625 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:06.625 12:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.625 12:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.625 12:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.625 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:06.625 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:06.625 12:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.625 12:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.625 [2024-09-30 12:27:18.510370] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:06.884 12:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.884 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:06.884 12:27:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.884 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.884 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.884 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.884 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.884 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.884 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.884 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.884 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.884 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.884 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.884 12:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.884 12:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.884 12:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.884 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.884 "name": "Existed_Raid", 00:10:06.884 "uuid": "d70f58da-e6ea-49ab-a356-ad1c5213247e", 00:10:06.884 "strip_size_kb": 64, 00:10:06.884 "state": "configuring", 00:10:06.884 "raid_level": "concat", 00:10:06.884 "superblock": true, 00:10:06.884 "num_base_bdevs": 3, 00:10:06.884 
"num_base_bdevs_discovered": 1, 00:10:06.884 "num_base_bdevs_operational": 3, 00:10:06.884 "base_bdevs_list": [ 00:10:06.884 { 00:10:06.884 "name": null, 00:10:06.884 "uuid": "7a3ec1b7-e4d1-4686-8b5e-a82bfed846d3", 00:10:06.884 "is_configured": false, 00:10:06.884 "data_offset": 0, 00:10:06.884 "data_size": 63488 00:10:06.884 }, 00:10:06.885 { 00:10:06.885 "name": null, 00:10:06.885 "uuid": "3341c3a4-97b3-4bf5-a6bc-45cd1d1fd726", 00:10:06.885 "is_configured": false, 00:10:06.885 "data_offset": 0, 00:10:06.885 "data_size": 63488 00:10:06.885 }, 00:10:06.885 { 00:10:06.885 "name": "BaseBdev3", 00:10:06.885 "uuid": "252bc578-330d-4f03-9a75-18f7080a42ee", 00:10:06.885 "is_configured": true, 00:10:06.885 "data_offset": 2048, 00:10:06.885 "data_size": 63488 00:10:06.885 } 00:10:06.885 ] 00:10:06.885 }' 00:10:06.885 12:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.885 12:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.143 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.143 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:07.143 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.143 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.402 12:27:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.402 [2024-09-30 12:27:19.076277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.402 
12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.402 "name": "Existed_Raid", 00:10:07.402 "uuid": "d70f58da-e6ea-49ab-a356-ad1c5213247e", 00:10:07.402 "strip_size_kb": 64, 00:10:07.402 "state": "configuring", 00:10:07.402 "raid_level": "concat", 00:10:07.402 "superblock": true, 00:10:07.402 "num_base_bdevs": 3, 00:10:07.402 "num_base_bdevs_discovered": 2, 00:10:07.402 "num_base_bdevs_operational": 3, 00:10:07.402 "base_bdevs_list": [ 00:10:07.402 { 00:10:07.402 "name": null, 00:10:07.402 "uuid": "7a3ec1b7-e4d1-4686-8b5e-a82bfed846d3", 00:10:07.402 "is_configured": false, 00:10:07.402 "data_offset": 0, 00:10:07.402 "data_size": 63488 00:10:07.402 }, 00:10:07.402 { 00:10:07.402 "name": "BaseBdev2", 00:10:07.402 "uuid": "3341c3a4-97b3-4bf5-a6bc-45cd1d1fd726", 00:10:07.402 "is_configured": true, 00:10:07.402 "data_offset": 2048, 00:10:07.402 "data_size": 63488 00:10:07.402 }, 00:10:07.402 { 00:10:07.402 "name": "BaseBdev3", 00:10:07.402 "uuid": "252bc578-330d-4f03-9a75-18f7080a42ee", 00:10:07.402 "is_configured": true, 00:10:07.402 "data_offset": 2048, 00:10:07.402 "data_size": 63488 00:10:07.402 } 00:10:07.402 ] 00:10:07.402 }' 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.402 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.662 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.662 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.662 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.662 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
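The `verify_raid_bdev_state` helper traced above (bdev_raid.sh@103-115) captures `rpc_cmd bdev_raid_get_bdevs all`, filters it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares the fields against the expected state; bdev_raid.sh@316 additionally checks `.[0].base_bdevs_list[1].is_configured` after `bdev_raid_add_base_bdev` claims BaseBdev2. A minimal Python sketch of those same checks, run against a trimmed copy of the JSON captured in this log (not live RPC output):

```python
import json

# Trimmed reproduction of the `bdev_raid_get_bdevs all` output logged above,
# after bdev_raid_add_base_bdev claimed BaseBdev2 (discovered went 1 -> 2).
raid_bdevs = json.loads("""
[{
  "name": "Existed_Raid",
  "uuid": "d70f58da-e6ea-49ab-a356-ad1c5213247e",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": null,        "is_configured": false, "data_offset": 0,    "data_size": 63488},
    {"name": "BaseBdev2", "is_configured": true,  "data_offset": 2048, "data_size": 63488},
    {"name": "BaseBdev3", "is_configured": true,  "data_offset": 2048, "data_size": 63488}
  ]
}]
""")

# jq equivalent: '.[] | select(.name == "Existed_Raid")'
info = next(b for b in raid_bdevs if b["name"] == "Existed_Raid")

# The comparisons verify_raid_bdev_state performs for
# `verify_raid_bdev_state Existed_Raid configuring concat 64 3`:
assert info["state"] == "configuring"
assert info["raid_level"] == "concat"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 3

# jq equivalent of the bdev_raid.sh@316 check: '.[0].base_bdevs_list[1].is_configured'
assert info["base_bdevs_list"][1]["is_configured"] is True
```

The JSON here is hand-trimmed from the dump in the log; field names and values are as logged, but fields not exercised by the checks are omitted.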
00:10:07.662 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7a3ec1b7-e4d1-4686-8b5e-a82bfed846d3 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.922 [2024-09-30 12:27:19.636134] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:07.922 [2024-09-30 12:27:19.636502] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:07.922 [2024-09-30 12:27:19.636564] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:07.922 [2024-09-30 12:27:19.636878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:07.922 [2024-09-30 12:27:19.637070] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:07.922 NewBaseBdev 00:10:07.922 [2024-09-30 12:27:19.637115] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:10:07.922 [2024-09-30 12:27:19.637270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.922 [ 00:10:07.922 { 00:10:07.922 "name": "NewBaseBdev", 00:10:07.922 "aliases": [ 00:10:07.922 "7a3ec1b7-e4d1-4686-8b5e-a82bfed846d3" 00:10:07.922 ], 00:10:07.922 "product_name": "Malloc disk", 00:10:07.922 "block_size": 512, 
00:10:07.922 "num_blocks": 65536, 00:10:07.922 "uuid": "7a3ec1b7-e4d1-4686-8b5e-a82bfed846d3", 00:10:07.922 "assigned_rate_limits": { 00:10:07.922 "rw_ios_per_sec": 0, 00:10:07.922 "rw_mbytes_per_sec": 0, 00:10:07.922 "r_mbytes_per_sec": 0, 00:10:07.922 "w_mbytes_per_sec": 0 00:10:07.922 }, 00:10:07.922 "claimed": true, 00:10:07.922 "claim_type": "exclusive_write", 00:10:07.922 "zoned": false, 00:10:07.922 "supported_io_types": { 00:10:07.922 "read": true, 00:10:07.922 "write": true, 00:10:07.922 "unmap": true, 00:10:07.922 "flush": true, 00:10:07.922 "reset": true, 00:10:07.922 "nvme_admin": false, 00:10:07.922 "nvme_io": false, 00:10:07.922 "nvme_io_md": false, 00:10:07.922 "write_zeroes": true, 00:10:07.922 "zcopy": true, 00:10:07.922 "get_zone_info": false, 00:10:07.922 "zone_management": false, 00:10:07.922 "zone_append": false, 00:10:07.922 "compare": false, 00:10:07.922 "compare_and_write": false, 00:10:07.922 "abort": true, 00:10:07.922 "seek_hole": false, 00:10:07.922 "seek_data": false, 00:10:07.922 "copy": true, 00:10:07.922 "nvme_iov_md": false 00:10:07.922 }, 00:10:07.922 "memory_domains": [ 00:10:07.922 { 00:10:07.922 "dma_device_id": "system", 00:10:07.922 "dma_device_type": 1 00:10:07.922 }, 00:10:07.922 { 00:10:07.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.922 "dma_device_type": 2 00:10:07.922 } 00:10:07.922 ], 00:10:07.922 "driver_specific": {} 00:10:07.922 } 00:10:07.922 ] 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.922 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.923 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.923 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.923 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.923 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.923 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.923 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.923 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.923 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.923 "name": "Existed_Raid", 00:10:07.923 "uuid": "d70f58da-e6ea-49ab-a356-ad1c5213247e", 00:10:07.923 "strip_size_kb": 64, 00:10:07.923 "state": "online", 00:10:07.923 "raid_level": "concat", 00:10:07.923 "superblock": true, 00:10:07.923 "num_base_bdevs": 3, 00:10:07.923 "num_base_bdevs_discovered": 3, 00:10:07.923 "num_base_bdevs_operational": 3, 00:10:07.923 "base_bdevs_list": [ 00:10:07.923 { 00:10:07.923 "name": "NewBaseBdev", 00:10:07.923 "uuid": 
"7a3ec1b7-e4d1-4686-8b5e-a82bfed846d3", 00:10:07.923 "is_configured": true, 00:10:07.923 "data_offset": 2048, 00:10:07.923 "data_size": 63488 00:10:07.923 }, 00:10:07.923 { 00:10:07.923 "name": "BaseBdev2", 00:10:07.923 "uuid": "3341c3a4-97b3-4bf5-a6bc-45cd1d1fd726", 00:10:07.923 "is_configured": true, 00:10:07.923 "data_offset": 2048, 00:10:07.923 "data_size": 63488 00:10:07.923 }, 00:10:07.923 { 00:10:07.923 "name": "BaseBdev3", 00:10:07.923 "uuid": "252bc578-330d-4f03-9a75-18f7080a42ee", 00:10:07.923 "is_configured": true, 00:10:07.923 "data_offset": 2048, 00:10:07.923 "data_size": 63488 00:10:07.923 } 00:10:07.923 ] 00:10:07.923 }' 00:10:07.923 12:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.923 12:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.492 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:08.492 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:08.492 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:08.492 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:08.492 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:08.492 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:08.492 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:08.492 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:08.492 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.492 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:08.492 [2024-09-30 12:27:20.147846] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.492 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.492 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:08.492 "name": "Existed_Raid", 00:10:08.492 "aliases": [ 00:10:08.492 "d70f58da-e6ea-49ab-a356-ad1c5213247e" 00:10:08.492 ], 00:10:08.492 "product_name": "Raid Volume", 00:10:08.492 "block_size": 512, 00:10:08.492 "num_blocks": 190464, 00:10:08.492 "uuid": "d70f58da-e6ea-49ab-a356-ad1c5213247e", 00:10:08.492 "assigned_rate_limits": { 00:10:08.492 "rw_ios_per_sec": 0, 00:10:08.492 "rw_mbytes_per_sec": 0, 00:10:08.492 "r_mbytes_per_sec": 0, 00:10:08.492 "w_mbytes_per_sec": 0 00:10:08.492 }, 00:10:08.492 "claimed": false, 00:10:08.492 "zoned": false, 00:10:08.492 "supported_io_types": { 00:10:08.492 "read": true, 00:10:08.492 "write": true, 00:10:08.492 "unmap": true, 00:10:08.492 "flush": true, 00:10:08.492 "reset": true, 00:10:08.492 "nvme_admin": false, 00:10:08.492 "nvme_io": false, 00:10:08.492 "nvme_io_md": false, 00:10:08.492 "write_zeroes": true, 00:10:08.492 "zcopy": false, 00:10:08.492 "get_zone_info": false, 00:10:08.492 "zone_management": false, 00:10:08.492 "zone_append": false, 00:10:08.492 "compare": false, 00:10:08.492 "compare_and_write": false, 00:10:08.492 "abort": false, 00:10:08.492 "seek_hole": false, 00:10:08.492 "seek_data": false, 00:10:08.492 "copy": false, 00:10:08.492 "nvme_iov_md": false 00:10:08.492 }, 00:10:08.492 "memory_domains": [ 00:10:08.492 { 00:10:08.492 "dma_device_id": "system", 00:10:08.492 "dma_device_type": 1 00:10:08.492 }, 00:10:08.492 { 00:10:08.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.492 "dma_device_type": 2 00:10:08.492 }, 00:10:08.492 { 00:10:08.492 "dma_device_id": "system", 00:10:08.492 "dma_device_type": 1 00:10:08.492 }, 00:10:08.492 { 00:10:08.492 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.492 "dma_device_type": 2 00:10:08.492 }, 00:10:08.492 { 00:10:08.492 "dma_device_id": "system", 00:10:08.492 "dma_device_type": 1 00:10:08.492 }, 00:10:08.492 { 00:10:08.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.492 "dma_device_type": 2 00:10:08.492 } 00:10:08.492 ], 00:10:08.492 "driver_specific": { 00:10:08.492 "raid": { 00:10:08.492 "uuid": "d70f58da-e6ea-49ab-a356-ad1c5213247e", 00:10:08.492 "strip_size_kb": 64, 00:10:08.492 "state": "online", 00:10:08.492 "raid_level": "concat", 00:10:08.492 "superblock": true, 00:10:08.492 "num_base_bdevs": 3, 00:10:08.492 "num_base_bdevs_discovered": 3, 00:10:08.492 "num_base_bdevs_operational": 3, 00:10:08.492 "base_bdevs_list": [ 00:10:08.492 { 00:10:08.492 "name": "NewBaseBdev", 00:10:08.492 "uuid": "7a3ec1b7-e4d1-4686-8b5e-a82bfed846d3", 00:10:08.492 "is_configured": true, 00:10:08.492 "data_offset": 2048, 00:10:08.492 "data_size": 63488 00:10:08.492 }, 00:10:08.492 { 00:10:08.492 "name": "BaseBdev2", 00:10:08.492 "uuid": "3341c3a4-97b3-4bf5-a6bc-45cd1d1fd726", 00:10:08.492 "is_configured": true, 00:10:08.492 "data_offset": 2048, 00:10:08.492 "data_size": 63488 00:10:08.492 }, 00:10:08.492 { 00:10:08.492 "name": "BaseBdev3", 00:10:08.492 "uuid": "252bc578-330d-4f03-9a75-18f7080a42ee", 00:10:08.492 "is_configured": true, 00:10:08.492 "data_offset": 2048, 00:10:08.492 "data_size": 63488 00:10:08.492 } 00:10:08.492 ] 00:10:08.492 } 00:10:08.492 } 00:10:08.492 }' 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:08.493 BaseBdev2 00:10:08.493 BaseBdev3' 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.493 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.753 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.753 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.753 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:08.753 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.753 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.753 [2024-09-30 12:27:20.391494] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:08.753 [2024-09-30 12:27:20.391526] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.753 [2024-09-30 12:27:20.391602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.753 [2024-09-30 12:27:20.391659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.753 [2024-09-30 12:27:20.391673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:10:08.753 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.753 12:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66120 00:10:08.753 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 66120 ']' 00:10:08.753 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 66120 00:10:08.753 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:08.753 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:08.753 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66120 00:10:08.753 killing process with pid 66120 00:10:08.753 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:08.753 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:08.753 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66120' 00:10:08.753 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 66120 00:10:08.753 [2024-09-30 12:27:20.442023] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:08.753 12:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 66120 00:10:09.013 [2024-09-30 12:27:20.727458] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:10.393 12:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:10.393 00:10:10.393 real 0m10.594s 00:10:10.393 user 0m16.735s 00:10:10.393 sys 0m1.891s 00:10:10.393 12:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:10.393 ************************************ 00:10:10.393 END TEST raid_state_function_test_sb 00:10:10.393 ************************************ 00:10:10.393 12:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.393 12:27:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:10.393 12:27:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:10.393 12:27:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.393 12:27:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:10.393 ************************************ 00:10:10.393 START TEST raid_superblock_test 00:10:10.393 ************************************ 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:10.393 12:27:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66746 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66746 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 66746 ']' 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:10.393 12:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.393 [2024-09-30 12:27:22.112171] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:10.394 [2024-09-30 12:27:22.112384] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66746 ] 00:10:10.394 [2024-09-30 12:27:22.258171] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.653 [2024-09-30 12:27:22.460595] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.912 [2024-09-30 12:27:22.660924] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.912 [2024-09-30 12:27:22.660986] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:11.172 
12:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.172 malloc1 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.172 [2024-09-30 12:27:22.983292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:11.172 [2024-09-30 12:27:22.983422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.172 [2024-09-30 12:27:22.983473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:11.172 [2024-09-30 12:27:22.983529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.172 [2024-09-30 12:27:22.985643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.172 [2024-09-30 12:27:22.985729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:11.172 pt1 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.172 12:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.172 malloc2 00:10:11.172 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.172 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:11.172 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.172 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.172 [2024-09-30 12:27:23.051464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:11.172 [2024-09-30 12:27:23.051579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.172 [2024-09-30 12:27:23.051625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:11.172 [2024-09-30 12:27:23.051662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.172 [2024-09-30 12:27:23.053691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.172 [2024-09-30 12:27:23.053789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:11.172 
pt2 00:10:11.172 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.172 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:11.173 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:11.173 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:11.173 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:11.173 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:11.173 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:11.173 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:11.173 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:11.173 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:11.173 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.173 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.432 malloc3 00:10:11.432 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.432 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:11.432 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.432 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.432 [2024-09-30 12:27:23.105521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:11.432 [2024-09-30 12:27:23.105625] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.432 [2024-09-30 12:27:23.105667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:11.432 [2024-09-30 12:27:23.105705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.432 [2024-09-30 12:27:23.107824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.432 [2024-09-30 12:27:23.107903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:11.432 pt3 00:10:11.432 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.432 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:11.432 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:11.432 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.433 [2024-09-30 12:27:23.117574] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:11.433 [2024-09-30 12:27:23.119452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:11.433 [2024-09-30 12:27:23.119536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:11.433 [2024-09-30 12:27:23.119690] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:11.433 [2024-09-30 12:27:23.119705] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:11.433 [2024-09-30 12:27:23.119966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:11.433 [2024-09-30 12:27:23.120153] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:11.433 [2024-09-30 12:27:23.120171] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:11.433 [2024-09-30 12:27:23.120321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.433 12:27:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.433 "name": "raid_bdev1", 00:10:11.433 "uuid": "98b42be9-e71c-4c54-8f7e-6a7405bff174", 00:10:11.433 "strip_size_kb": 64, 00:10:11.433 "state": "online", 00:10:11.433 "raid_level": "concat", 00:10:11.433 "superblock": true, 00:10:11.433 "num_base_bdevs": 3, 00:10:11.433 "num_base_bdevs_discovered": 3, 00:10:11.433 "num_base_bdevs_operational": 3, 00:10:11.433 "base_bdevs_list": [ 00:10:11.433 { 00:10:11.433 "name": "pt1", 00:10:11.433 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.433 "is_configured": true, 00:10:11.433 "data_offset": 2048, 00:10:11.433 "data_size": 63488 00:10:11.433 }, 00:10:11.433 { 00:10:11.433 "name": "pt2", 00:10:11.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.433 "is_configured": true, 00:10:11.433 "data_offset": 2048, 00:10:11.433 "data_size": 63488 00:10:11.433 }, 00:10:11.433 { 00:10:11.433 "name": "pt3", 00:10:11.433 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.433 "is_configured": true, 00:10:11.433 "data_offset": 2048, 00:10:11.433 "data_size": 63488 00:10:11.433 } 00:10:11.433 ] 00:10:11.433 }' 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.433 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.693 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:11.693 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:11.693 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:11.693 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:11.693 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.693 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.693 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.693 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:11.693 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.693 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.693 [2024-09-30 12:27:23.553111] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.693 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.693 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.693 "name": "raid_bdev1", 00:10:11.693 "aliases": [ 00:10:11.693 "98b42be9-e71c-4c54-8f7e-6a7405bff174" 00:10:11.693 ], 00:10:11.693 "product_name": "Raid Volume", 00:10:11.693 "block_size": 512, 00:10:11.693 "num_blocks": 190464, 00:10:11.693 "uuid": "98b42be9-e71c-4c54-8f7e-6a7405bff174", 00:10:11.693 "assigned_rate_limits": { 00:10:11.693 "rw_ios_per_sec": 0, 00:10:11.693 "rw_mbytes_per_sec": 0, 00:10:11.693 "r_mbytes_per_sec": 0, 00:10:11.693 "w_mbytes_per_sec": 0 00:10:11.693 }, 00:10:11.693 "claimed": false, 00:10:11.693 "zoned": false, 00:10:11.693 "supported_io_types": { 00:10:11.693 "read": true, 00:10:11.693 "write": true, 00:10:11.693 "unmap": true, 00:10:11.693 "flush": true, 00:10:11.693 "reset": true, 00:10:11.693 "nvme_admin": false, 00:10:11.693 "nvme_io": false, 00:10:11.693 "nvme_io_md": false, 00:10:11.693 "write_zeroes": true, 00:10:11.693 "zcopy": false, 00:10:11.693 "get_zone_info": false, 00:10:11.693 "zone_management": false, 00:10:11.693 "zone_append": false, 00:10:11.693 "compare": 
false, 00:10:11.693 "compare_and_write": false, 00:10:11.693 "abort": false, 00:10:11.693 "seek_hole": false, 00:10:11.693 "seek_data": false, 00:10:11.693 "copy": false, 00:10:11.693 "nvme_iov_md": false 00:10:11.693 }, 00:10:11.693 "memory_domains": [ 00:10:11.693 { 00:10:11.693 "dma_device_id": "system", 00:10:11.693 "dma_device_type": 1 00:10:11.693 }, 00:10:11.693 { 00:10:11.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.693 "dma_device_type": 2 00:10:11.693 }, 00:10:11.693 { 00:10:11.693 "dma_device_id": "system", 00:10:11.693 "dma_device_type": 1 00:10:11.693 }, 00:10:11.693 { 00:10:11.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.693 "dma_device_type": 2 00:10:11.693 }, 00:10:11.693 { 00:10:11.693 "dma_device_id": "system", 00:10:11.693 "dma_device_type": 1 00:10:11.693 }, 00:10:11.693 { 00:10:11.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.693 "dma_device_type": 2 00:10:11.693 } 00:10:11.693 ], 00:10:11.693 "driver_specific": { 00:10:11.693 "raid": { 00:10:11.693 "uuid": "98b42be9-e71c-4c54-8f7e-6a7405bff174", 00:10:11.693 "strip_size_kb": 64, 00:10:11.693 "state": "online", 00:10:11.693 "raid_level": "concat", 00:10:11.693 "superblock": true, 00:10:11.693 "num_base_bdevs": 3, 00:10:11.693 "num_base_bdevs_discovered": 3, 00:10:11.693 "num_base_bdevs_operational": 3, 00:10:11.693 "base_bdevs_list": [ 00:10:11.693 { 00:10:11.693 "name": "pt1", 00:10:11.693 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.693 "is_configured": true, 00:10:11.693 "data_offset": 2048, 00:10:11.693 "data_size": 63488 00:10:11.693 }, 00:10:11.693 { 00:10:11.693 "name": "pt2", 00:10:11.693 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.693 "is_configured": true, 00:10:11.693 "data_offset": 2048, 00:10:11.693 "data_size": 63488 00:10:11.693 }, 00:10:11.693 { 00:10:11.694 "name": "pt3", 00:10:11.694 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.694 "is_configured": true, 00:10:11.694 "data_offset": 2048, 00:10:11.694 
"data_size": 63488 00:10:11.694 } 00:10:11.694 ] 00:10:11.694 } 00:10:11.694 } 00:10:11.694 }' 00:10:11.694 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:11.953 pt2 00:10:11.953 pt3' 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.953 12:27:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.953 [2024-09-30 12:27:23.776661] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.953 12:27:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=98b42be9-e71c-4c54-8f7e-6a7405bff174 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 98b42be9-e71c-4c54-8f7e-6a7405bff174 ']' 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.953 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.954 [2024-09-30 12:27:23.824293] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.954 [2024-09-30 12:27:23.824372] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.954 [2024-09-30 12:27:23.824449] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.954 [2024-09-30 12:27:23.824532] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.954 [2024-09-30 12:27:23.824557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:11.954 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.954 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.954 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.954 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.954 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:11.954 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.214 12:27:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.214 [2024-09-30 12:27:23.972079] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:12.214 [2024-09-30 12:27:23.974019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:10:12.214 [2024-09-30 12:27:23.974124] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:12.214 [2024-09-30 12:27:23.974208] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:12.214 [2024-09-30 12:27:23.974303] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:12.214 [2024-09-30 12:27:23.974365] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:12.214 [2024-09-30 12:27:23.974441] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:12.214 [2024-09-30 12:27:23.974452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:12.214 request: 00:10:12.214 { 00:10:12.214 "name": "raid_bdev1", 00:10:12.214 "raid_level": "concat", 00:10:12.214 "base_bdevs": [ 00:10:12.214 "malloc1", 00:10:12.214 "malloc2", 00:10:12.214 "malloc3" 00:10:12.214 ], 00:10:12.214 "strip_size_kb": 64, 00:10:12.214 "superblock": false, 00:10:12.214 "method": "bdev_raid_create", 00:10:12.214 "req_id": 1 00:10:12.214 } 00:10:12.214 Got JSON-RPC error response 00:10:12.214 response: 00:10:12.214 { 00:10:12.214 "code": -17, 00:10:12.214 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:12.214 } 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.214 12:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.214 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:12.214 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:12.214 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:12.214 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.214 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.214 [2024-09-30 12:27:24.035922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:12.214 [2024-09-30 12:27:24.036019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.214 [2024-09-30 12:27:24.036076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:12.214 [2024-09-30 12:27:24.036121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.214 [2024-09-30 12:27:24.038279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.214 [2024-09-30 12:27:24.038355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:12.214 [2024-09-30 12:27:24.038457] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:12.214 [2024-09-30 12:27:24.038534] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:12.214 pt1 00:10:12.214 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.214 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:12.214 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.214 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.214 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.214 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.214 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.214 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.214 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.214 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.215 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.215 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.215 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.215 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.215 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.215 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.215 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.215 "name": "raid_bdev1", 
00:10:12.215 "uuid": "98b42be9-e71c-4c54-8f7e-6a7405bff174", 00:10:12.215 "strip_size_kb": 64, 00:10:12.215 "state": "configuring", 00:10:12.215 "raid_level": "concat", 00:10:12.215 "superblock": true, 00:10:12.215 "num_base_bdevs": 3, 00:10:12.215 "num_base_bdevs_discovered": 1, 00:10:12.215 "num_base_bdevs_operational": 3, 00:10:12.215 "base_bdevs_list": [ 00:10:12.215 { 00:10:12.215 "name": "pt1", 00:10:12.215 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.215 "is_configured": true, 00:10:12.215 "data_offset": 2048, 00:10:12.215 "data_size": 63488 00:10:12.215 }, 00:10:12.215 { 00:10:12.215 "name": null, 00:10:12.215 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.215 "is_configured": false, 00:10:12.215 "data_offset": 2048, 00:10:12.215 "data_size": 63488 00:10:12.215 }, 00:10:12.215 { 00:10:12.215 "name": null, 00:10:12.215 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.215 "is_configured": false, 00:10:12.215 "data_offset": 2048, 00:10:12.215 "data_size": 63488 00:10:12.215 } 00:10:12.215 ] 00:10:12.215 }' 00:10:12.215 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.215 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.784 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:12.784 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:12.784 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.784 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.784 [2024-09-30 12:27:24.455321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:12.784 [2024-09-30 12:27:24.455467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.784 [2024-09-30 12:27:24.455516] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:12.784 [2024-09-30 12:27:24.455552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.784 [2024-09-30 12:27:24.456019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.784 [2024-09-30 12:27:24.456085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:12.784 [2024-09-30 12:27:24.456202] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:12.784 [2024-09-30 12:27:24.456260] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:12.784 pt2 00:10:12.784 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.784 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.785 [2024-09-30 12:27:24.463321] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.785 "name": "raid_bdev1", 00:10:12.785 "uuid": "98b42be9-e71c-4c54-8f7e-6a7405bff174", 00:10:12.785 "strip_size_kb": 64, 00:10:12.785 "state": "configuring", 00:10:12.785 "raid_level": "concat", 00:10:12.785 "superblock": true, 00:10:12.785 "num_base_bdevs": 3, 00:10:12.785 "num_base_bdevs_discovered": 1, 00:10:12.785 "num_base_bdevs_operational": 3, 00:10:12.785 "base_bdevs_list": [ 00:10:12.785 { 00:10:12.785 "name": "pt1", 00:10:12.785 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.785 "is_configured": true, 00:10:12.785 "data_offset": 2048, 00:10:12.785 "data_size": 63488 00:10:12.785 }, 00:10:12.785 { 00:10:12.785 "name": null, 00:10:12.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.785 "is_configured": false, 00:10:12.785 "data_offset": 0, 00:10:12.785 "data_size": 63488 00:10:12.785 }, 00:10:12.785 { 00:10:12.785 "name": null, 00:10:12.785 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.785 "is_configured": false, 00:10:12.785 "data_offset": 2048, 00:10:12.785 "data_size": 63488 00:10:12.785 } 00:10:12.785 ] 00:10:12.785 }' 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.785 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.045 [2024-09-30 12:27:24.886580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:13.045 [2024-09-30 12:27:24.886644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.045 [2024-09-30 12:27:24.886663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:13.045 [2024-09-30 12:27:24.886675] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.045 [2024-09-30 12:27:24.887106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.045 [2024-09-30 12:27:24.887129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:13.045 [2024-09-30 12:27:24.887201] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:13.045 [2024-09-30 12:27:24.887237] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:13.045 pt2 00:10:13.045 12:27:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.045 [2024-09-30 12:27:24.898589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:13.045 [2024-09-30 12:27:24.898642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.045 [2024-09-30 12:27:24.898658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:13.045 [2024-09-30 12:27:24.898669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.045 [2024-09-30 12:27:24.899058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.045 [2024-09-30 12:27:24.899090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:13.045 [2024-09-30 12:27:24.899149] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:13.045 [2024-09-30 12:27:24.899172] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:13.045 [2024-09-30 12:27:24.899284] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:13.045 [2024-09-30 12:27:24.899296] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:13.045 [2024-09-30 12:27:24.899557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:10:13.045 [2024-09-30 12:27:24.899706] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:13.045 [2024-09-30 12:27:24.899715] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:13.045 [2024-09-30 12:27:24.899873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.045 pt3 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.045 12:27:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.045 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.304 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.304 "name": "raid_bdev1", 00:10:13.304 "uuid": "98b42be9-e71c-4c54-8f7e-6a7405bff174", 00:10:13.304 "strip_size_kb": 64, 00:10:13.304 "state": "online", 00:10:13.304 "raid_level": "concat", 00:10:13.304 "superblock": true, 00:10:13.304 "num_base_bdevs": 3, 00:10:13.304 "num_base_bdevs_discovered": 3, 00:10:13.304 "num_base_bdevs_operational": 3, 00:10:13.304 "base_bdevs_list": [ 00:10:13.304 { 00:10:13.304 "name": "pt1", 00:10:13.304 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.304 "is_configured": true, 00:10:13.304 "data_offset": 2048, 00:10:13.304 "data_size": 63488 00:10:13.304 }, 00:10:13.304 { 00:10:13.304 "name": "pt2", 00:10:13.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.304 "is_configured": true, 00:10:13.304 "data_offset": 2048, 00:10:13.304 "data_size": 63488 00:10:13.304 }, 00:10:13.304 { 00:10:13.304 "name": "pt3", 00:10:13.304 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.304 "is_configured": true, 00:10:13.304 "data_offset": 2048, 00:10:13.304 "data_size": 63488 00:10:13.304 } 00:10:13.304 ] 00:10:13.304 }' 00:10:13.304 12:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.304 12:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.564 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:13.564 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:10:13.564 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.564 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.564 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.564 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.564 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:13.564 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.564 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.564 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.564 [2024-09-30 12:27:25.322151] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.564 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.564 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.564 "name": "raid_bdev1", 00:10:13.564 "aliases": [ 00:10:13.564 "98b42be9-e71c-4c54-8f7e-6a7405bff174" 00:10:13.564 ], 00:10:13.564 "product_name": "Raid Volume", 00:10:13.564 "block_size": 512, 00:10:13.564 "num_blocks": 190464, 00:10:13.564 "uuid": "98b42be9-e71c-4c54-8f7e-6a7405bff174", 00:10:13.564 "assigned_rate_limits": { 00:10:13.564 "rw_ios_per_sec": 0, 00:10:13.564 "rw_mbytes_per_sec": 0, 00:10:13.564 "r_mbytes_per_sec": 0, 00:10:13.564 "w_mbytes_per_sec": 0 00:10:13.564 }, 00:10:13.564 "claimed": false, 00:10:13.564 "zoned": false, 00:10:13.564 "supported_io_types": { 00:10:13.564 "read": true, 00:10:13.564 "write": true, 00:10:13.564 "unmap": true, 00:10:13.564 "flush": true, 00:10:13.564 "reset": true, 00:10:13.564 "nvme_admin": false, 00:10:13.564 "nvme_io": false, 
00:10:13.564 "nvme_io_md": false, 00:10:13.564 "write_zeroes": true, 00:10:13.564 "zcopy": false, 00:10:13.564 "get_zone_info": false, 00:10:13.564 "zone_management": false, 00:10:13.564 "zone_append": false, 00:10:13.564 "compare": false, 00:10:13.564 "compare_and_write": false, 00:10:13.564 "abort": false, 00:10:13.564 "seek_hole": false, 00:10:13.564 "seek_data": false, 00:10:13.564 "copy": false, 00:10:13.564 "nvme_iov_md": false 00:10:13.564 }, 00:10:13.564 "memory_domains": [ 00:10:13.564 { 00:10:13.564 "dma_device_id": "system", 00:10:13.564 "dma_device_type": 1 00:10:13.564 }, 00:10:13.564 { 00:10:13.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.564 "dma_device_type": 2 00:10:13.564 }, 00:10:13.564 { 00:10:13.564 "dma_device_id": "system", 00:10:13.564 "dma_device_type": 1 00:10:13.564 }, 00:10:13.564 { 00:10:13.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.564 "dma_device_type": 2 00:10:13.564 }, 00:10:13.564 { 00:10:13.564 "dma_device_id": "system", 00:10:13.564 "dma_device_type": 1 00:10:13.564 }, 00:10:13.564 { 00:10:13.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.564 "dma_device_type": 2 00:10:13.564 } 00:10:13.564 ], 00:10:13.564 "driver_specific": { 00:10:13.564 "raid": { 00:10:13.564 "uuid": "98b42be9-e71c-4c54-8f7e-6a7405bff174", 00:10:13.564 "strip_size_kb": 64, 00:10:13.564 "state": "online", 00:10:13.564 "raid_level": "concat", 00:10:13.564 "superblock": true, 00:10:13.564 "num_base_bdevs": 3, 00:10:13.564 "num_base_bdevs_discovered": 3, 00:10:13.564 "num_base_bdevs_operational": 3, 00:10:13.564 "base_bdevs_list": [ 00:10:13.564 { 00:10:13.564 "name": "pt1", 00:10:13.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.564 "is_configured": true, 00:10:13.564 "data_offset": 2048, 00:10:13.564 "data_size": 63488 00:10:13.564 }, 00:10:13.564 { 00:10:13.564 "name": "pt2", 00:10:13.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.564 "is_configured": true, 00:10:13.564 "data_offset": 2048, 00:10:13.564 
"data_size": 63488 00:10:13.564 }, 00:10:13.564 { 00:10:13.564 "name": "pt3", 00:10:13.564 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.564 "is_configured": true, 00:10:13.564 "data_offset": 2048, 00:10:13.564 "data_size": 63488 00:10:13.564 } 00:10:13.564 ] 00:10:13.564 } 00:10:13.564 } 00:10:13.564 }' 00:10:13.564 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.564 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:13.564 pt2 00:10:13.564 pt3' 00:10:13.564 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.564 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.564 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.564 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.564 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:13.565 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.565 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.824 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:13.825 [2024-09-30 12:27:25.553732] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 98b42be9-e71c-4c54-8f7e-6a7405bff174 '!=' 98b42be9-e71c-4c54-8f7e-6a7405bff174 ']' 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66746 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 66746 ']' 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 66746 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66746 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66746' 00:10:13.825 killing process with pid 66746 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 66746 00:10:13.825 [2024-09-30 12:27:25.635185] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:10:13.825 [2024-09-30 12:27:25.635329] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.825 [2024-09-30 12:27:25.635426] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.825 12:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 66746 00:10:13.825 [2024-09-30 12:27:25.635485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:14.085 [2024-09-30 12:27:25.917415] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.465 12:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:15.465 00:10:15.465 real 0m5.108s 00:10:15.465 user 0m7.208s 00:10:15.465 sys 0m0.890s 00:10:15.465 12:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.465 12:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.465 ************************************ 00:10:15.465 END TEST raid_superblock_test 00:10:15.465 ************************************ 00:10:15.465 12:27:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:10:15.465 12:27:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:15.465 12:27:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.465 12:27:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.465 ************************************ 00:10:15.465 START TEST raid_read_error_test 00:10:15.465 ************************************ 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:15.465 12:27:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oWlNTufAg7 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66994 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66994 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 66994 ']' 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.465 12:27:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.465 [2024-09-30 12:27:27.321151] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:15.465 [2024-09-30 12:27:27.321271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66994 ] 00:10:15.725 [2024-09-30 12:27:27.489331] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.985 [2024-09-30 12:27:27.692891] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.244 [2024-09-30 12:27:27.882377] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.244 [2024-09-30 12:27:27.882440] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.244 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.244 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:16.244 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.244 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:16.244 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.244 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.504 BaseBdev1_malloc 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.504 true 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.504 [2024-09-30 12:27:28.177157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:16.504 [2024-09-30 12:27:28.177221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.504 [2024-09-30 12:27:28.177241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:16.504 [2024-09-30 12:27:28.177254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.504 [2024-09-30 12:27:28.179315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.504 [2024-09-30 12:27:28.179436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:16.504 BaseBdev1 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.504 BaseBdev2_malloc 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.504 true 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.504 [2024-09-30 12:27:28.255779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:16.504 [2024-09-30 12:27:28.255848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.504 [2024-09-30 12:27:28.255890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:16.504 [2024-09-30 12:27:28.255910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.504 [2024-09-30 12:27:28.258078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.504 [2024-09-30 12:27:28.258123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:16.504 BaseBdev2 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.504 BaseBdev3_malloc 00:10:16.504 12:27:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:16.504 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.505 true 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.505 [2024-09-30 12:27:28.321698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:16.505 [2024-09-30 12:27:28.321833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.505 [2024-09-30 12:27:28.321874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:16.505 [2024-09-30 12:27:28.321887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.505 [2024-09-30 12:27:28.324001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.505 [2024-09-30 12:27:28.324085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:16.505 BaseBdev3 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.505 [2024-09-30 12:27:28.333730] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.505 [2024-09-30 12:27:28.335469] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.505 [2024-09-30 12:27:28.335556] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.505 [2024-09-30 12:27:28.335773] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:16.505 [2024-09-30 12:27:28.335787] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:16.505 [2024-09-30 12:27:28.336031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:16.505 [2024-09-30 12:27:28.336204] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:16.505 [2024-09-30 12:27:28.336217] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:16.505 [2024-09-30 12:27:28.336371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.505 12:27:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.505 "name": "raid_bdev1", 00:10:16.505 "uuid": "9f5a501b-ba5d-45d2-a53b-9b91d6f51a56", 00:10:16.505 "strip_size_kb": 64, 00:10:16.505 "state": "online", 00:10:16.505 "raid_level": "concat", 00:10:16.505 "superblock": true, 00:10:16.505 "num_base_bdevs": 3, 00:10:16.505 "num_base_bdevs_discovered": 3, 00:10:16.505 "num_base_bdevs_operational": 3, 00:10:16.505 "base_bdevs_list": [ 00:10:16.505 { 00:10:16.505 "name": "BaseBdev1", 00:10:16.505 "uuid": "635d2a88-0556-5d83-b496-b79f0470a4fd", 00:10:16.505 "is_configured": true, 00:10:16.505 "data_offset": 2048, 00:10:16.505 "data_size": 63488 00:10:16.505 }, 00:10:16.505 { 00:10:16.505 "name": "BaseBdev2", 00:10:16.505 "uuid": "ee74859d-9a97-5dbc-ad2a-f6a44b995290", 00:10:16.505 "is_configured": true, 00:10:16.505 "data_offset": 2048, 00:10:16.505 "data_size": 63488 
00:10:16.505 }, 00:10:16.505 { 00:10:16.505 "name": "BaseBdev3", 00:10:16.505 "uuid": "1ea019a1-ae09-5587-b21a-d3c3d084e1a9", 00:10:16.505 "is_configured": true, 00:10:16.505 "data_offset": 2048, 00:10:16.505 "data_size": 63488 00:10:16.505 } 00:10:16.505 ] 00:10:16.505 }' 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.505 12:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.073 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:17.073 12:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:17.073 [2024-09-30 12:27:28.854019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.029 12:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.029 "name": "raid_bdev1", 00:10:18.030 "uuid": "9f5a501b-ba5d-45d2-a53b-9b91d6f51a56", 00:10:18.030 "strip_size_kb": 64, 00:10:18.030 "state": "online", 00:10:18.030 "raid_level": "concat", 00:10:18.030 "superblock": true, 00:10:18.030 "num_base_bdevs": 3, 00:10:18.030 "num_base_bdevs_discovered": 3, 00:10:18.030 "num_base_bdevs_operational": 3, 00:10:18.030 "base_bdevs_list": [ 00:10:18.030 { 00:10:18.030 "name": "BaseBdev1", 00:10:18.030 "uuid": "635d2a88-0556-5d83-b496-b79f0470a4fd", 00:10:18.030 "is_configured": true, 00:10:18.030 "data_offset": 2048, 00:10:18.030 "data_size": 63488 
00:10:18.030 }, 00:10:18.030 { 00:10:18.030 "name": "BaseBdev2", 00:10:18.030 "uuid": "ee74859d-9a97-5dbc-ad2a-f6a44b995290", 00:10:18.030 "is_configured": true, 00:10:18.030 "data_offset": 2048, 00:10:18.030 "data_size": 63488 00:10:18.030 }, 00:10:18.030 { 00:10:18.030 "name": "BaseBdev3", 00:10:18.030 "uuid": "1ea019a1-ae09-5587-b21a-d3c3d084e1a9", 00:10:18.030 "is_configured": true, 00:10:18.030 "data_offset": 2048, 00:10:18.030 "data_size": 63488 00:10:18.030 } 00:10:18.030 ] 00:10:18.030 }' 00:10:18.030 12:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.030 12:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.603 12:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:18.603 12:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.603 12:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.603 [2024-09-30 12:27:30.249997] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.603 [2024-09-30 12:27:30.250089] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:18.603 [2024-09-30 12:27:30.252759] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.603 [2024-09-30 12:27:30.252855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.603 [2024-09-30 12:27:30.252918] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.603 [2024-09-30 12:27:30.252969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:18.603 { 00:10:18.603 "results": [ 00:10:18.603 { 00:10:18.603 "job": "raid_bdev1", 00:10:18.603 "core_mask": "0x1", 00:10:18.603 "workload": "randrw", 00:10:18.603 "percentage": 50, 
00:10:18.603 "status": "finished", 00:10:18.603 "queue_depth": 1, 00:10:18.603 "io_size": 131072, 00:10:18.603 "runtime": 1.39689, 00:10:18.603 "iops": 15857.368869417061, 00:10:18.603 "mibps": 1982.1711086771327, 00:10:18.603 "io_failed": 1, 00:10:18.603 "io_timeout": 0, 00:10:18.603 "avg_latency_us": 87.62704742619866, 00:10:18.603 "min_latency_us": 25.7117903930131, 00:10:18.603 "max_latency_us": 1380.8349344978167 00:10:18.603 } 00:10:18.603 ], 00:10:18.603 "core_count": 1 00:10:18.603 } 00:10:18.603 12:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.603 12:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66994 00:10:18.603 12:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 66994 ']' 00:10:18.603 12:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 66994 00:10:18.603 12:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:18.603 12:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:18.603 12:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66994 00:10:18.603 killing process with pid 66994 00:10:18.603 12:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:18.603 12:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:18.603 12:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66994' 00:10:18.603 12:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 66994 00:10:18.603 12:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 66994 00:10:18.603 [2024-09-30 12:27:30.298845] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:18.863 [2024-09-30 
12:27:30.523123] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:20.242 12:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oWlNTufAg7 00:10:20.242 12:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:20.242 12:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:20.242 12:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:20.242 12:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:20.242 12:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:20.242 12:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:20.242 12:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:20.242 00:10:20.242 real 0m4.603s 00:10:20.242 user 0m5.402s 00:10:20.242 sys 0m0.582s 00:10:20.242 12:27:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.242 12:27:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.242 ************************************ 00:10:20.242 END TEST raid_read_error_test 00:10:20.242 ************************************ 00:10:20.242 12:27:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:10:20.242 12:27:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:20.242 12:27:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.242 12:27:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:20.242 ************************************ 00:10:20.242 START TEST raid_write_error_test 00:10:20.242 ************************************ 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:10:20.242 12:27:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:20.242 12:27:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.v2TFJzp37k 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67139 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67139 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 67139 ']' 00:10:20.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:20.242 12:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.242 [2024-09-30 12:27:31.994943] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:10:20.242 [2024-09-30 12:27:31.995141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67139 ] 00:10:20.502 [2024-09-30 12:27:32.161886] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.502 [2024-09-30 12:27:32.357776] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.760 [2024-09-30 12:27:32.548721] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.760 [2024-09-30 12:27:32.548772] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.019 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.019 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:21.019 12:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.019 12:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:21.019 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.019 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.019 BaseBdev1_malloc 00:10:21.019 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.019 12:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:21.019 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.019 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.019 true 00:10:21.019 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.019 12:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:21.019 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.019 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.019 [2024-09-30 12:27:32.856242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:21.019 [2024-09-30 12:27:32.856311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.019 [2024-09-30 12:27:32.856331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:21.019 [2024-09-30 12:27:32.856345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.019 [2024-09-30 12:27:32.858512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.019 [2024-09-30 12:27:32.858562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:21.019 BaseBdev1 00:10:21.019 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.019 12:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.019 12:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:21.019 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.019 12:27:32 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.279 BaseBdev2_malloc 00:10:21.279 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.279 12:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:21.279 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.279 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.279 true 00:10:21.279 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.279 12:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:21.279 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.279 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.279 [2024-09-30 12:27:32.942204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:21.279 [2024-09-30 12:27:32.942270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.279 [2024-09-30 12:27:32.942305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:21.279 [2024-09-30 12:27:32.942318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.279 [2024-09-30 12:27:32.944469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.279 [2024-09-30 12:27:32.944521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:21.279 BaseBdev2 00:10:21.279 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.279 12:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.279 12:27:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:21.279 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.279 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.279 BaseBdev3_malloc 00:10:21.279 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.279 12:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:21.279 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.279 12:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.279 true 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.279 [2024-09-30 12:27:33.009050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:21.279 [2024-09-30 12:27:33.009153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.279 [2024-09-30 12:27:33.009209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:21.279 [2024-09-30 12:27:33.009247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.279 [2024-09-30 12:27:33.011296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.279 [2024-09-30 12:27:33.011402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:21.279 BaseBdev3 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.279 [2024-09-30 12:27:33.021103] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.279 [2024-09-30 12:27:33.022935] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.279 [2024-09-30 12:27:33.023060] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:21.279 [2024-09-30 12:27:33.023315] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:21.279 [2024-09-30 12:27:33.023380] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:21.279 [2024-09-30 12:27:33.023649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:21.279 [2024-09-30 12:27:33.023876] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:21.279 [2024-09-30 12:27:33.023929] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:21.279 [2024-09-30 12:27:33.024120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.279 "name": "raid_bdev1", 00:10:21.279 "uuid": "4e99cb75-2224-4de1-b247-4f633b7161df", 00:10:21.279 "strip_size_kb": 64, 00:10:21.279 "state": "online", 00:10:21.279 "raid_level": "concat", 00:10:21.279 "superblock": true, 00:10:21.279 "num_base_bdevs": 3, 00:10:21.279 "num_base_bdevs_discovered": 3, 00:10:21.279 "num_base_bdevs_operational": 3, 00:10:21.279 "base_bdevs_list": [ 00:10:21.279 { 00:10:21.279 
"name": "BaseBdev1", 00:10:21.279 "uuid": "92c14dd4-296d-5e09-81d8-75503945c891", 00:10:21.279 "is_configured": true, 00:10:21.279 "data_offset": 2048, 00:10:21.279 "data_size": 63488 00:10:21.279 }, 00:10:21.279 { 00:10:21.279 "name": "BaseBdev2", 00:10:21.279 "uuid": "e3284c86-9bb2-5662-bebb-2374ac7c2e53", 00:10:21.279 "is_configured": true, 00:10:21.279 "data_offset": 2048, 00:10:21.279 "data_size": 63488 00:10:21.279 }, 00:10:21.279 { 00:10:21.279 "name": "BaseBdev3", 00:10:21.279 "uuid": "a7a9db7a-a077-5130-98d5-6a785ebc0870", 00:10:21.279 "is_configured": true, 00:10:21.279 "data_offset": 2048, 00:10:21.279 "data_size": 63488 00:10:21.279 } 00:10:21.279 ] 00:10:21.279 }' 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.279 12:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.848 12:27:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:21.848 12:27:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:21.848 [2024-09-30 12:27:33.561393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.788 "name": "raid_bdev1", 00:10:22.788 "uuid": "4e99cb75-2224-4de1-b247-4f633b7161df", 00:10:22.788 "strip_size_kb": 64, 00:10:22.788 "state": "online", 
00:10:22.788 "raid_level": "concat", 00:10:22.788 "superblock": true, 00:10:22.788 "num_base_bdevs": 3, 00:10:22.788 "num_base_bdevs_discovered": 3, 00:10:22.788 "num_base_bdevs_operational": 3, 00:10:22.788 "base_bdevs_list": [ 00:10:22.788 { 00:10:22.788 "name": "BaseBdev1", 00:10:22.788 "uuid": "92c14dd4-296d-5e09-81d8-75503945c891", 00:10:22.788 "is_configured": true, 00:10:22.788 "data_offset": 2048, 00:10:22.788 "data_size": 63488 00:10:22.788 }, 00:10:22.788 { 00:10:22.788 "name": "BaseBdev2", 00:10:22.788 "uuid": "e3284c86-9bb2-5662-bebb-2374ac7c2e53", 00:10:22.788 "is_configured": true, 00:10:22.788 "data_offset": 2048, 00:10:22.788 "data_size": 63488 00:10:22.788 }, 00:10:22.788 { 00:10:22.788 "name": "BaseBdev3", 00:10:22.788 "uuid": "a7a9db7a-a077-5130-98d5-6a785ebc0870", 00:10:22.788 "is_configured": true, 00:10:22.788 "data_offset": 2048, 00:10:22.788 "data_size": 63488 00:10:22.788 } 00:10:22.788 ] 00:10:22.788 }' 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.788 12:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.357 12:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:23.357 12:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.357 12:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.357 [2024-09-30 12:27:34.955920] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.357 [2024-09-30 12:27:34.956012] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.357 [2024-09-30 12:27:34.958566] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.357 [2024-09-30 12:27:34.958673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.357 [2024-09-30 12:27:34.958737] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.358 [2024-09-30 12:27:34.958837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:23.358 12:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.358 12:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67139 00:10:23.358 { 00:10:23.358 "results": [ 00:10:23.358 { 00:10:23.358 "job": "raid_bdev1", 00:10:23.358 "core_mask": "0x1", 00:10:23.358 "workload": "randrw", 00:10:23.358 "percentage": 50, 00:10:23.358 "status": "finished", 00:10:23.358 "queue_depth": 1, 00:10:23.358 "io_size": 131072, 00:10:23.358 "runtime": 1.395583, 00:10:23.358 "iops": 16463.370505373023, 00:10:23.358 "mibps": 2057.921313171628, 00:10:23.358 "io_failed": 1, 00:10:23.358 "io_timeout": 0, 00:10:23.358 "avg_latency_us": 84.34456465198824, 00:10:23.358 "min_latency_us": 24.929257641921396, 00:10:23.358 "max_latency_us": 1387.989519650655 00:10:23.358 } 00:10:23.358 ], 00:10:23.358 "core_count": 1 00:10:23.358 } 00:10:23.358 12:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 67139 ']' 00:10:23.358 12:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 67139 00:10:23.358 12:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:23.358 12:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:23.358 12:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67139 00:10:23.358 12:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:23.358 12:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:23.358 12:27:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 67139' 00:10:23.358 killing process with pid 67139 00:10:23.358 12:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 67139 00:10:23.358 [2024-09-30 12:27:35.001374] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.358 12:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 67139 00:10:23.358 [2024-09-30 12:27:35.222494] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.736 12:27:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.v2TFJzp37k 00:10:24.736 12:27:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:24.736 12:27:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:24.736 12:27:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:24.736 12:27:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:24.736 12:27:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:24.736 12:27:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:24.736 ************************************ 00:10:24.736 END TEST raid_write_error_test 00:10:24.736 ************************************ 00:10:24.736 12:27:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:24.736 00:10:24.736 real 0m4.630s 00:10:24.736 user 0m5.415s 00:10:24.736 sys 0m0.577s 00:10:24.736 12:27:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:24.736 12:27:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.736 12:27:36 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:24.736 12:27:36 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:10:24.736 12:27:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:24.736 12:27:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:24.736 12:27:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.736 ************************************ 00:10:24.736 START TEST raid_state_function_test 00:10:24.736 ************************************ 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67283 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67283' 00:10:24.736 Process raid pid: 67283 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67283 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 67283 ']' 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:24.736 12:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.995 [2024-09-30 12:27:36.697967] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:10:24.995 [2024-09-30 12:27:36.698668] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.995 [2024-09-30 12:27:36.868434] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.254 [2024-09-30 12:27:37.066480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.513 [2024-09-30 12:27:37.267816] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.513 [2024-09-30 12:27:37.267940] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.772 [2024-09-30 12:27:37.511013] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.772 [2024-09-30 12:27:37.511074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.772 [2024-09-30 12:27:37.511086] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.772 [2024-09-30 12:27:37.511097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.772 [2024-09-30 12:27:37.511108] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:25.772 [2024-09-30 12:27:37.511121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.772 
12:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.772 "name": "Existed_Raid", 00:10:25.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.772 "strip_size_kb": 0, 00:10:25.772 "state": "configuring", 00:10:25.772 "raid_level": "raid1", 00:10:25.772 "superblock": false, 00:10:25.772 "num_base_bdevs": 3, 00:10:25.772 "num_base_bdevs_discovered": 0, 00:10:25.772 "num_base_bdevs_operational": 3, 00:10:25.772 "base_bdevs_list": [ 00:10:25.772 { 00:10:25.772 "name": "BaseBdev1", 00:10:25.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.772 "is_configured": false, 00:10:25.772 "data_offset": 0, 00:10:25.772 "data_size": 0 00:10:25.772 }, 00:10:25.772 { 00:10:25.772 "name": "BaseBdev2", 00:10:25.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.772 "is_configured": false, 00:10:25.772 "data_offset": 0, 00:10:25.772 "data_size": 0 00:10:25.772 }, 00:10:25.772 { 00:10:25.772 "name": "BaseBdev3", 00:10:25.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.772 "is_configured": false, 00:10:25.772 "data_offset": 0, 00:10:25.772 "data_size": 0 00:10:25.772 } 00:10:25.772 ] 00:10:25.772 }' 00:10:25.772 12:27:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.772 12:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.340 12:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.340 12:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.340 12:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.340 [2024-09-30 12:27:37.958200] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.340 [2024-09-30 12:27:37.958302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:26.340 12:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.340 12:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:26.340 12:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.340 12:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.340 [2024-09-30 12:27:37.966176] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.340 [2024-09-30 12:27:37.966269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.340 [2024-09-30 12:27:37.966319] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.340 [2024-09-30 12:27:37.966347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.340 [2024-09-30 12:27:37.966370] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.340 [2024-09-30 12:27:37.966397] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.340 12:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.340 12:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:26.340 12:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.340 12:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.340 [2024-09-30 12:27:38.020351] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.340 BaseBdev1 00:10:26.340 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.340 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:26.340 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.341 [ 00:10:26.341 { 00:10:26.341 "name": "BaseBdev1", 00:10:26.341 "aliases": [ 00:10:26.341 "a3c479ac-d761-44d3-b5ed-f9b92ac69539" 00:10:26.341 ], 00:10:26.341 "product_name": "Malloc disk", 00:10:26.341 "block_size": 512, 00:10:26.341 "num_blocks": 65536, 00:10:26.341 "uuid": "a3c479ac-d761-44d3-b5ed-f9b92ac69539", 00:10:26.341 "assigned_rate_limits": { 00:10:26.341 "rw_ios_per_sec": 0, 00:10:26.341 "rw_mbytes_per_sec": 0, 00:10:26.341 "r_mbytes_per_sec": 0, 00:10:26.341 "w_mbytes_per_sec": 0 00:10:26.341 }, 00:10:26.341 "claimed": true, 00:10:26.341 "claim_type": "exclusive_write", 00:10:26.341 "zoned": false, 00:10:26.341 "supported_io_types": { 00:10:26.341 "read": true, 00:10:26.341 "write": true, 00:10:26.341 "unmap": true, 00:10:26.341 "flush": true, 00:10:26.341 "reset": true, 00:10:26.341 "nvme_admin": false, 00:10:26.341 "nvme_io": false, 00:10:26.341 "nvme_io_md": false, 00:10:26.341 "write_zeroes": true, 00:10:26.341 "zcopy": true, 00:10:26.341 "get_zone_info": false, 00:10:26.341 "zone_management": false, 00:10:26.341 "zone_append": false, 00:10:26.341 "compare": false, 00:10:26.341 "compare_and_write": false, 00:10:26.341 "abort": true, 00:10:26.341 "seek_hole": false, 00:10:26.341 "seek_data": false, 00:10:26.341 "copy": true, 00:10:26.341 "nvme_iov_md": false 00:10:26.341 }, 00:10:26.341 "memory_domains": [ 00:10:26.341 { 00:10:26.341 "dma_device_id": "system", 00:10:26.341 "dma_device_type": 1 00:10:26.341 }, 00:10:26.341 { 00:10:26.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.341 "dma_device_type": 2 00:10:26.341 } 00:10:26.341 ], 00:10:26.341 "driver_specific": {} 00:10:26.341 } 00:10:26.341 ] 00:10:26.341 12:27:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:26.341 "name": "Existed_Raid", 00:10:26.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.341 "strip_size_kb": 0, 00:10:26.341 "state": "configuring", 00:10:26.341 "raid_level": "raid1", 00:10:26.341 "superblock": false, 00:10:26.341 "num_base_bdevs": 3, 00:10:26.341 "num_base_bdevs_discovered": 1, 00:10:26.341 "num_base_bdevs_operational": 3, 00:10:26.341 "base_bdevs_list": [ 00:10:26.341 { 00:10:26.341 "name": "BaseBdev1", 00:10:26.341 "uuid": "a3c479ac-d761-44d3-b5ed-f9b92ac69539", 00:10:26.341 "is_configured": true, 00:10:26.341 "data_offset": 0, 00:10:26.341 "data_size": 65536 00:10:26.341 }, 00:10:26.341 { 00:10:26.341 "name": "BaseBdev2", 00:10:26.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.341 "is_configured": false, 00:10:26.341 "data_offset": 0, 00:10:26.341 "data_size": 0 00:10:26.341 }, 00:10:26.341 { 00:10:26.341 "name": "BaseBdev3", 00:10:26.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.341 "is_configured": false, 00:10:26.341 "data_offset": 0, 00:10:26.341 "data_size": 0 00:10:26.341 } 00:10:26.341 ] 00:10:26.341 }' 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.341 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.910 [2024-09-30 12:27:38.503569] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.910 [2024-09-30 12:27:38.503684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.910 [2024-09-30 12:27:38.511581] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.910 [2024-09-30 12:27:38.513473] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.910 [2024-09-30 12:27:38.513525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.910 [2024-09-30 12:27:38.513537] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.910 [2024-09-30 12:27:38.513548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.910 "name": "Existed_Raid", 00:10:26.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.910 "strip_size_kb": 0, 00:10:26.910 "state": "configuring", 00:10:26.910 "raid_level": "raid1", 00:10:26.910 "superblock": false, 00:10:26.910 "num_base_bdevs": 3, 00:10:26.910 "num_base_bdevs_discovered": 1, 00:10:26.910 "num_base_bdevs_operational": 3, 00:10:26.910 "base_bdevs_list": [ 00:10:26.910 { 00:10:26.910 "name": "BaseBdev1", 00:10:26.910 "uuid": "a3c479ac-d761-44d3-b5ed-f9b92ac69539", 00:10:26.910 "is_configured": true, 00:10:26.910 "data_offset": 0, 00:10:26.910 "data_size": 65536 00:10:26.910 }, 00:10:26.910 { 00:10:26.910 "name": "BaseBdev2", 00:10:26.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.910 
"is_configured": false, 00:10:26.910 "data_offset": 0, 00:10:26.910 "data_size": 0 00:10:26.910 }, 00:10:26.910 { 00:10:26.910 "name": "BaseBdev3", 00:10:26.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.910 "is_configured": false, 00:10:26.910 "data_offset": 0, 00:10:26.910 "data_size": 0 00:10:26.910 } 00:10:26.910 ] 00:10:26.910 }' 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.910 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.170 [2024-09-30 12:27:38.957799] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.170 BaseBdev2 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:27.170 12:27:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.170 [ 00:10:27.170 { 00:10:27.170 "name": "BaseBdev2", 00:10:27.170 "aliases": [ 00:10:27.170 "a65427dd-4ff6-4eec-af02-22808758f738" 00:10:27.170 ], 00:10:27.170 "product_name": "Malloc disk", 00:10:27.170 "block_size": 512, 00:10:27.170 "num_blocks": 65536, 00:10:27.170 "uuid": "a65427dd-4ff6-4eec-af02-22808758f738", 00:10:27.170 "assigned_rate_limits": { 00:10:27.170 "rw_ios_per_sec": 0, 00:10:27.170 "rw_mbytes_per_sec": 0, 00:10:27.170 "r_mbytes_per_sec": 0, 00:10:27.170 "w_mbytes_per_sec": 0 00:10:27.170 }, 00:10:27.170 "claimed": true, 00:10:27.170 "claim_type": "exclusive_write", 00:10:27.170 "zoned": false, 00:10:27.170 "supported_io_types": { 00:10:27.170 "read": true, 00:10:27.170 "write": true, 00:10:27.170 "unmap": true, 00:10:27.170 "flush": true, 00:10:27.170 "reset": true, 00:10:27.170 "nvme_admin": false, 00:10:27.170 "nvme_io": false, 00:10:27.170 "nvme_io_md": false, 00:10:27.170 "write_zeroes": true, 00:10:27.170 "zcopy": true, 00:10:27.170 "get_zone_info": false, 00:10:27.170 "zone_management": false, 00:10:27.170 "zone_append": false, 00:10:27.170 "compare": false, 00:10:27.170 "compare_and_write": false, 00:10:27.170 "abort": true, 00:10:27.170 "seek_hole": false, 00:10:27.170 "seek_data": false, 00:10:27.170 "copy": true, 00:10:27.170 "nvme_iov_md": false 00:10:27.170 }, 00:10:27.170 
"memory_domains": [ 00:10:27.170 { 00:10:27.170 "dma_device_id": "system", 00:10:27.170 "dma_device_type": 1 00:10:27.170 }, 00:10:27.170 { 00:10:27.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.170 "dma_device_type": 2 00:10:27.170 } 00:10:27.170 ], 00:10:27.170 "driver_specific": {} 00:10:27.170 } 00:10:27.170 ] 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.170 12:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.170 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:27.170 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.170 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.170 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.170 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.170 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.170 "name": "Existed_Raid", 00:10:27.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.170 "strip_size_kb": 0, 00:10:27.170 "state": "configuring", 00:10:27.170 "raid_level": "raid1", 00:10:27.170 "superblock": false, 00:10:27.170 "num_base_bdevs": 3, 00:10:27.170 "num_base_bdevs_discovered": 2, 00:10:27.170 "num_base_bdevs_operational": 3, 00:10:27.170 "base_bdevs_list": [ 00:10:27.170 { 00:10:27.170 "name": "BaseBdev1", 00:10:27.170 "uuid": "a3c479ac-d761-44d3-b5ed-f9b92ac69539", 00:10:27.170 "is_configured": true, 00:10:27.170 "data_offset": 0, 00:10:27.171 "data_size": 65536 00:10:27.171 }, 00:10:27.171 { 00:10:27.171 "name": "BaseBdev2", 00:10:27.171 "uuid": "a65427dd-4ff6-4eec-af02-22808758f738", 00:10:27.171 "is_configured": true, 00:10:27.171 "data_offset": 0, 00:10:27.171 "data_size": 65536 00:10:27.171 }, 00:10:27.171 { 00:10:27.171 "name": "BaseBdev3", 00:10:27.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.171 "is_configured": false, 00:10:27.171 "data_offset": 0, 00:10:27.171 "data_size": 0 00:10:27.171 } 00:10:27.171 ] 00:10:27.171 }' 00:10:27.171 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.171 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.737 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:10:27.737 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.737 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.737 [2024-09-30 12:27:39.440534] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.737 [2024-09-30 12:27:39.440587] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:27.737 [2024-09-30 12:27:39.440606] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:27.737 [2024-09-30 12:27:39.440910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:27.737 [2024-09-30 12:27:39.441102] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:27.737 [2024-09-30 12:27:39.441119] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:27.737 [2024-09-30 12:27:39.441442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.737 BaseBdev3 00:10:27.737 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.737 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:27.737 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:27.737 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:27.737 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:27.737 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:27.737 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:27.737 12:27:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:27.737 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.737 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.737 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.737 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:27.737 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.737 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.737 [ 00:10:27.737 { 00:10:27.737 "name": "BaseBdev3", 00:10:27.737 "aliases": [ 00:10:27.737 "fa73a120-3e3c-4199-89ff-06de506282e4" 00:10:27.737 ], 00:10:27.737 "product_name": "Malloc disk", 00:10:27.737 "block_size": 512, 00:10:27.737 "num_blocks": 65536, 00:10:27.737 "uuid": "fa73a120-3e3c-4199-89ff-06de506282e4", 00:10:27.737 "assigned_rate_limits": { 00:10:27.737 "rw_ios_per_sec": 0, 00:10:27.737 "rw_mbytes_per_sec": 0, 00:10:27.737 "r_mbytes_per_sec": 0, 00:10:27.737 "w_mbytes_per_sec": 0 00:10:27.738 }, 00:10:27.738 "claimed": true, 00:10:27.738 "claim_type": "exclusive_write", 00:10:27.738 "zoned": false, 00:10:27.738 "supported_io_types": { 00:10:27.738 "read": true, 00:10:27.738 "write": true, 00:10:27.738 "unmap": true, 00:10:27.738 "flush": true, 00:10:27.738 "reset": true, 00:10:27.738 "nvme_admin": false, 00:10:27.738 "nvme_io": false, 00:10:27.738 "nvme_io_md": false, 00:10:27.738 "write_zeroes": true, 00:10:27.738 "zcopy": true, 00:10:27.738 "get_zone_info": false, 00:10:27.738 "zone_management": false, 00:10:27.738 "zone_append": false, 00:10:27.738 "compare": false, 00:10:27.738 "compare_and_write": false, 00:10:27.738 "abort": true, 00:10:27.738 "seek_hole": false, 00:10:27.738 "seek_data": false, 00:10:27.738 
"copy": true, 00:10:27.738 "nvme_iov_md": false 00:10:27.738 }, 00:10:27.738 "memory_domains": [ 00:10:27.738 { 00:10:27.738 "dma_device_id": "system", 00:10:27.738 "dma_device_type": 1 00:10:27.738 }, 00:10:27.738 { 00:10:27.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.738 "dma_device_type": 2 00:10:27.738 } 00:10:27.738 ], 00:10:27.738 "driver_specific": {} 00:10:27.738 } 00:10:27.738 ] 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.738 12:27:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.738 "name": "Existed_Raid", 00:10:27.738 "uuid": "a0675efa-3858-4c5b-98a3-585140ac3bf3", 00:10:27.738 "strip_size_kb": 0, 00:10:27.738 "state": "online", 00:10:27.738 "raid_level": "raid1", 00:10:27.738 "superblock": false, 00:10:27.738 "num_base_bdevs": 3, 00:10:27.738 "num_base_bdevs_discovered": 3, 00:10:27.738 "num_base_bdevs_operational": 3, 00:10:27.738 "base_bdevs_list": [ 00:10:27.738 { 00:10:27.738 "name": "BaseBdev1", 00:10:27.738 "uuid": "a3c479ac-d761-44d3-b5ed-f9b92ac69539", 00:10:27.738 "is_configured": true, 00:10:27.738 "data_offset": 0, 00:10:27.738 "data_size": 65536 00:10:27.738 }, 00:10:27.738 { 00:10:27.738 "name": "BaseBdev2", 00:10:27.738 "uuid": "a65427dd-4ff6-4eec-af02-22808758f738", 00:10:27.738 "is_configured": true, 00:10:27.738 "data_offset": 0, 00:10:27.738 "data_size": 65536 00:10:27.738 }, 00:10:27.738 { 00:10:27.738 "name": "BaseBdev3", 00:10:27.738 "uuid": "fa73a120-3e3c-4199-89ff-06de506282e4", 00:10:27.738 "is_configured": true, 00:10:27.738 "data_offset": 0, 00:10:27.738 "data_size": 65536 00:10:27.738 } 00:10:27.738 ] 00:10:27.738 }' 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.738 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.304 12:27:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:28.304 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:28.304 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:28.304 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:28.304 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:28.304 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:28.304 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:28.304 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:28.304 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.304 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.304 [2024-09-30 12:27:39.912166] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.304 12:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.304 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:28.304 "name": "Existed_Raid", 00:10:28.304 "aliases": [ 00:10:28.304 "a0675efa-3858-4c5b-98a3-585140ac3bf3" 00:10:28.304 ], 00:10:28.304 "product_name": "Raid Volume", 00:10:28.304 "block_size": 512, 00:10:28.304 "num_blocks": 65536, 00:10:28.304 "uuid": "a0675efa-3858-4c5b-98a3-585140ac3bf3", 00:10:28.304 "assigned_rate_limits": { 00:10:28.304 "rw_ios_per_sec": 0, 00:10:28.304 "rw_mbytes_per_sec": 0, 00:10:28.304 "r_mbytes_per_sec": 0, 00:10:28.304 "w_mbytes_per_sec": 0 00:10:28.304 }, 00:10:28.304 "claimed": false, 00:10:28.304 "zoned": false, 
00:10:28.304 "supported_io_types": { 00:10:28.304 "read": true, 00:10:28.304 "write": true, 00:10:28.304 "unmap": false, 00:10:28.304 "flush": false, 00:10:28.304 "reset": true, 00:10:28.304 "nvme_admin": false, 00:10:28.304 "nvme_io": false, 00:10:28.304 "nvme_io_md": false, 00:10:28.304 "write_zeroes": true, 00:10:28.305 "zcopy": false, 00:10:28.305 "get_zone_info": false, 00:10:28.305 "zone_management": false, 00:10:28.305 "zone_append": false, 00:10:28.305 "compare": false, 00:10:28.305 "compare_and_write": false, 00:10:28.305 "abort": false, 00:10:28.305 "seek_hole": false, 00:10:28.305 "seek_data": false, 00:10:28.305 "copy": false, 00:10:28.305 "nvme_iov_md": false 00:10:28.305 }, 00:10:28.305 "memory_domains": [ 00:10:28.305 { 00:10:28.305 "dma_device_id": "system", 00:10:28.305 "dma_device_type": 1 00:10:28.305 }, 00:10:28.305 { 00:10:28.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.305 "dma_device_type": 2 00:10:28.305 }, 00:10:28.305 { 00:10:28.305 "dma_device_id": "system", 00:10:28.305 "dma_device_type": 1 00:10:28.305 }, 00:10:28.305 { 00:10:28.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.305 "dma_device_type": 2 00:10:28.305 }, 00:10:28.305 { 00:10:28.305 "dma_device_id": "system", 00:10:28.305 "dma_device_type": 1 00:10:28.305 }, 00:10:28.305 { 00:10:28.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.305 "dma_device_type": 2 00:10:28.305 } 00:10:28.305 ], 00:10:28.305 "driver_specific": { 00:10:28.305 "raid": { 00:10:28.305 "uuid": "a0675efa-3858-4c5b-98a3-585140ac3bf3", 00:10:28.305 "strip_size_kb": 0, 00:10:28.305 "state": "online", 00:10:28.305 "raid_level": "raid1", 00:10:28.305 "superblock": false, 00:10:28.305 "num_base_bdevs": 3, 00:10:28.305 "num_base_bdevs_discovered": 3, 00:10:28.305 "num_base_bdevs_operational": 3, 00:10:28.305 "base_bdevs_list": [ 00:10:28.305 { 00:10:28.305 "name": "BaseBdev1", 00:10:28.305 "uuid": "a3c479ac-d761-44d3-b5ed-f9b92ac69539", 00:10:28.305 "is_configured": true, 00:10:28.305 
"data_offset": 0, 00:10:28.305 "data_size": 65536 00:10:28.305 }, 00:10:28.305 { 00:10:28.305 "name": "BaseBdev2", 00:10:28.305 "uuid": "a65427dd-4ff6-4eec-af02-22808758f738", 00:10:28.305 "is_configured": true, 00:10:28.305 "data_offset": 0, 00:10:28.305 "data_size": 65536 00:10:28.305 }, 00:10:28.305 { 00:10:28.305 "name": "BaseBdev3", 00:10:28.305 "uuid": "fa73a120-3e3c-4199-89ff-06de506282e4", 00:10:28.305 "is_configured": true, 00:10:28.305 "data_offset": 0, 00:10:28.305 "data_size": 65536 00:10:28.305 } 00:10:28.305 ] 00:10:28.305 } 00:10:28.305 } 00:10:28.305 }' 00:10:28.305 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:28.305 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:28.305 BaseBdev2 00:10:28.305 BaseBdev3' 00:10:28.305 12:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.305 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.305 [2024-09-30 12:27:40.171519] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:28.563 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.563 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:28.563 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:28.563 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:28.563 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:28.563 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:28.563 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:28.563 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.563 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.563 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.563 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.563 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:28.563 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.563 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:28.563 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.563 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.563 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.564 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.564 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.564 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.564 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.564 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.564 "name": "Existed_Raid", 00:10:28.564 "uuid": "a0675efa-3858-4c5b-98a3-585140ac3bf3", 00:10:28.564 "strip_size_kb": 0, 00:10:28.564 "state": "online", 00:10:28.564 "raid_level": "raid1", 00:10:28.564 "superblock": false, 00:10:28.564 "num_base_bdevs": 3, 00:10:28.564 "num_base_bdevs_discovered": 2, 00:10:28.564 "num_base_bdevs_operational": 2, 00:10:28.564 "base_bdevs_list": [ 00:10:28.564 { 00:10:28.564 "name": null, 00:10:28.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.564 "is_configured": false, 00:10:28.564 "data_offset": 0, 00:10:28.564 "data_size": 65536 00:10:28.564 }, 00:10:28.564 { 00:10:28.564 "name": "BaseBdev2", 00:10:28.564 "uuid": "a65427dd-4ff6-4eec-af02-22808758f738", 00:10:28.564 "is_configured": true, 00:10:28.564 "data_offset": 0, 00:10:28.564 "data_size": 65536 00:10:28.564 }, 00:10:28.564 { 00:10:28.564 "name": "BaseBdev3", 00:10:28.564 "uuid": "fa73a120-3e3c-4199-89ff-06de506282e4", 00:10:28.564 "is_configured": true, 00:10:28.564 "data_offset": 0, 00:10:28.564 "data_size": 65536 00:10:28.564 } 00:10:28.564 ] 
00:10:28.564 }' 00:10:28.564 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.564 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.821 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:28.821 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.821 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.821 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.822 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.822 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.080 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.080 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.080 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.080 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:29.080 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.080 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.080 [2024-09-30 12:27:40.741908] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:29.080 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.080 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.080 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.080 12:27:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.080 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.080 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.080 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:29.080 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.080 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.080 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.080 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:29.080 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.080 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.080 [2024-09-30 12:27:40.894437] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:29.080 [2024-09-30 12:27:40.894554] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.339 [2024-09-30 12:27:40.985822] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.339 [2024-09-30 12:27:40.985882] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.339 [2024-09-30 12:27:40.985897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:29.339 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.339 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.339 12:27:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.339 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:29.339 12:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.339 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.339 12:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.339 BaseBdev2 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:29.339 
12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.339 [ 00:10:29.339 { 00:10:29.339 "name": "BaseBdev2", 00:10:29.339 "aliases": [ 00:10:29.339 "1fe55737-5b6e-45aa-9a6f-1d03fa456849" 00:10:29.339 ], 00:10:29.339 "product_name": "Malloc disk", 00:10:29.339 "block_size": 512, 00:10:29.339 "num_blocks": 65536, 00:10:29.339 "uuid": "1fe55737-5b6e-45aa-9a6f-1d03fa456849", 00:10:29.339 "assigned_rate_limits": { 00:10:29.339 "rw_ios_per_sec": 0, 00:10:29.339 "rw_mbytes_per_sec": 0, 00:10:29.339 "r_mbytes_per_sec": 0, 00:10:29.339 "w_mbytes_per_sec": 0 00:10:29.339 }, 00:10:29.339 "claimed": false, 00:10:29.339 "zoned": false, 00:10:29.339 "supported_io_types": { 00:10:29.339 "read": true, 00:10:29.339 "write": true, 00:10:29.339 "unmap": true, 00:10:29.339 "flush": true, 00:10:29.339 "reset": true, 00:10:29.339 "nvme_admin": false, 00:10:29.339 "nvme_io": false, 00:10:29.339 "nvme_io_md": false, 00:10:29.339 "write_zeroes": true, 
00:10:29.339 "zcopy": true, 00:10:29.339 "get_zone_info": false, 00:10:29.339 "zone_management": false, 00:10:29.339 "zone_append": false, 00:10:29.339 "compare": false, 00:10:29.339 "compare_and_write": false, 00:10:29.339 "abort": true, 00:10:29.339 "seek_hole": false, 00:10:29.339 "seek_data": false, 00:10:29.339 "copy": true, 00:10:29.339 "nvme_iov_md": false 00:10:29.339 }, 00:10:29.339 "memory_domains": [ 00:10:29.339 { 00:10:29.339 "dma_device_id": "system", 00:10:29.339 "dma_device_type": 1 00:10:29.339 }, 00:10:29.339 { 00:10:29.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.339 "dma_device_type": 2 00:10:29.339 } 00:10:29.339 ], 00:10:29.339 "driver_specific": {} 00:10:29.339 } 00:10:29.339 ] 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.339 BaseBdev3 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:29.339 12:27:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.339 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.339 [ 00:10:29.339 { 00:10:29.339 "name": "BaseBdev3", 00:10:29.339 "aliases": [ 00:10:29.339 "1913f736-12fc-4215-92d2-74a17fed184e" 00:10:29.339 ], 00:10:29.339 "product_name": "Malloc disk", 00:10:29.339 "block_size": 512, 00:10:29.339 "num_blocks": 65536, 00:10:29.339 "uuid": "1913f736-12fc-4215-92d2-74a17fed184e", 00:10:29.339 "assigned_rate_limits": { 00:10:29.339 "rw_ios_per_sec": 0, 00:10:29.339 "rw_mbytes_per_sec": 0, 00:10:29.339 "r_mbytes_per_sec": 0, 00:10:29.339 "w_mbytes_per_sec": 0 00:10:29.339 }, 00:10:29.339 "claimed": false, 00:10:29.339 "zoned": false, 00:10:29.339 "supported_io_types": { 00:10:29.339 "read": true, 00:10:29.339 "write": true, 00:10:29.339 "unmap": true, 00:10:29.339 "flush": true, 00:10:29.339 "reset": true, 00:10:29.339 "nvme_admin": false, 00:10:29.339 "nvme_io": false, 00:10:29.339 "nvme_io_md": false, 00:10:29.339 "write_zeroes": true, 
00:10:29.339 "zcopy": true, 00:10:29.339 "get_zone_info": false, 00:10:29.339 "zone_management": false, 00:10:29.339 "zone_append": false, 00:10:29.339 "compare": false, 00:10:29.340 "compare_and_write": false, 00:10:29.340 "abort": true, 00:10:29.340 "seek_hole": false, 00:10:29.340 "seek_data": false, 00:10:29.340 "copy": true, 00:10:29.340 "nvme_iov_md": false 00:10:29.340 }, 00:10:29.340 "memory_domains": [ 00:10:29.340 { 00:10:29.340 "dma_device_id": "system", 00:10:29.340 "dma_device_type": 1 00:10:29.340 }, 00:10:29.340 { 00:10:29.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.340 "dma_device_type": 2 00:10:29.340 } 00:10:29.340 ], 00:10:29.340 "driver_specific": {} 00:10:29.340 } 00:10:29.340 ] 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.340 [2024-09-30 12:27:41.207915] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:29.340 [2024-09-30 12:27:41.207978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:29.340 [2024-09-30 12:27:41.208001] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.340 [2024-09-30 12:27:41.209907] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.340 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.598 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.598 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:29.598 "name": "Existed_Raid", 00:10:29.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.598 "strip_size_kb": 0, 00:10:29.598 "state": "configuring", 00:10:29.598 "raid_level": "raid1", 00:10:29.598 "superblock": false, 00:10:29.598 "num_base_bdevs": 3, 00:10:29.598 "num_base_bdevs_discovered": 2, 00:10:29.598 "num_base_bdevs_operational": 3, 00:10:29.598 "base_bdevs_list": [ 00:10:29.598 { 00:10:29.598 "name": "BaseBdev1", 00:10:29.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.598 "is_configured": false, 00:10:29.598 "data_offset": 0, 00:10:29.598 "data_size": 0 00:10:29.598 }, 00:10:29.598 { 00:10:29.598 "name": "BaseBdev2", 00:10:29.598 "uuid": "1fe55737-5b6e-45aa-9a6f-1d03fa456849", 00:10:29.598 "is_configured": true, 00:10:29.598 "data_offset": 0, 00:10:29.598 "data_size": 65536 00:10:29.598 }, 00:10:29.598 { 00:10:29.598 "name": "BaseBdev3", 00:10:29.598 "uuid": "1913f736-12fc-4215-92d2-74a17fed184e", 00:10:29.598 "is_configured": true, 00:10:29.598 "data_offset": 0, 00:10:29.598 "data_size": 65536 00:10:29.598 } 00:10:29.598 ] 00:10:29.598 }' 00:10:29.598 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.598 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.858 [2024-09-30 12:27:41.639206] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.858 "name": "Existed_Raid", 00:10:29.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.858 "strip_size_kb": 0, 00:10:29.858 "state": "configuring", 00:10:29.858 "raid_level": "raid1", 00:10:29.858 "superblock": false, 00:10:29.858 "num_base_bdevs": 3, 
00:10:29.858 "num_base_bdevs_discovered": 1, 00:10:29.858 "num_base_bdevs_operational": 3, 00:10:29.858 "base_bdevs_list": [ 00:10:29.858 { 00:10:29.858 "name": "BaseBdev1", 00:10:29.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.858 "is_configured": false, 00:10:29.858 "data_offset": 0, 00:10:29.858 "data_size": 0 00:10:29.858 }, 00:10:29.858 { 00:10:29.858 "name": null, 00:10:29.858 "uuid": "1fe55737-5b6e-45aa-9a6f-1d03fa456849", 00:10:29.858 "is_configured": false, 00:10:29.858 "data_offset": 0, 00:10:29.858 "data_size": 65536 00:10:29.858 }, 00:10:29.858 { 00:10:29.858 "name": "BaseBdev3", 00:10:29.858 "uuid": "1913f736-12fc-4215-92d2-74a17fed184e", 00:10:29.858 "is_configured": true, 00:10:29.858 "data_offset": 0, 00:10:29.858 "data_size": 65536 00:10:29.858 } 00:10:29.858 ] 00:10:29.858 }' 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.858 12:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.426 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.426 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.426 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.426 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:30.426 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.426 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:30.426 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.427 12:27:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.427 BaseBdev1 00:10:30.427 [2024-09-30 12:27:42.182214] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.427 [ 00:10:30.427 { 00:10:30.427 "name": "BaseBdev1", 00:10:30.427 "aliases": [ 00:10:30.427 "f9e564c1-cf95-4e43-9523-f70a004b1b10" 00:10:30.427 ], 00:10:30.427 "product_name": "Malloc disk", 
00:10:30.427 "block_size": 512, 00:10:30.427 "num_blocks": 65536, 00:10:30.427 "uuid": "f9e564c1-cf95-4e43-9523-f70a004b1b10", 00:10:30.427 "assigned_rate_limits": { 00:10:30.427 "rw_ios_per_sec": 0, 00:10:30.427 "rw_mbytes_per_sec": 0, 00:10:30.427 "r_mbytes_per_sec": 0, 00:10:30.427 "w_mbytes_per_sec": 0 00:10:30.427 }, 00:10:30.427 "claimed": true, 00:10:30.427 "claim_type": "exclusive_write", 00:10:30.427 "zoned": false, 00:10:30.427 "supported_io_types": { 00:10:30.427 "read": true, 00:10:30.427 "write": true, 00:10:30.427 "unmap": true, 00:10:30.427 "flush": true, 00:10:30.427 "reset": true, 00:10:30.427 "nvme_admin": false, 00:10:30.427 "nvme_io": false, 00:10:30.427 "nvme_io_md": false, 00:10:30.427 "write_zeroes": true, 00:10:30.427 "zcopy": true, 00:10:30.427 "get_zone_info": false, 00:10:30.427 "zone_management": false, 00:10:30.427 "zone_append": false, 00:10:30.427 "compare": false, 00:10:30.427 "compare_and_write": false, 00:10:30.427 "abort": true, 00:10:30.427 "seek_hole": false, 00:10:30.427 "seek_data": false, 00:10:30.427 "copy": true, 00:10:30.427 "nvme_iov_md": false 00:10:30.427 }, 00:10:30.427 "memory_domains": [ 00:10:30.427 { 00:10:30.427 "dma_device_id": "system", 00:10:30.427 "dma_device_type": 1 00:10:30.427 }, 00:10:30.427 { 00:10:30.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.427 "dma_device_type": 2 00:10:30.427 } 00:10:30.427 ], 00:10:30.427 "driver_specific": {} 00:10:30.427 } 00:10:30.427 ] 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.427 "name": "Existed_Raid", 00:10:30.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.427 "strip_size_kb": 0, 00:10:30.427 "state": "configuring", 00:10:30.427 "raid_level": "raid1", 00:10:30.427 "superblock": false, 00:10:30.427 "num_base_bdevs": 3, 00:10:30.427 "num_base_bdevs_discovered": 2, 00:10:30.427 "num_base_bdevs_operational": 3, 00:10:30.427 "base_bdevs_list": [ 00:10:30.427 { 00:10:30.427 "name": "BaseBdev1", 00:10:30.427 "uuid": 
"f9e564c1-cf95-4e43-9523-f70a004b1b10", 00:10:30.427 "is_configured": true, 00:10:30.427 "data_offset": 0, 00:10:30.427 "data_size": 65536 00:10:30.427 }, 00:10:30.427 { 00:10:30.427 "name": null, 00:10:30.427 "uuid": "1fe55737-5b6e-45aa-9a6f-1d03fa456849", 00:10:30.427 "is_configured": false, 00:10:30.427 "data_offset": 0, 00:10:30.427 "data_size": 65536 00:10:30.427 }, 00:10:30.427 { 00:10:30.427 "name": "BaseBdev3", 00:10:30.427 "uuid": "1913f736-12fc-4215-92d2-74a17fed184e", 00:10:30.427 "is_configured": true, 00:10:30.427 "data_offset": 0, 00:10:30.427 "data_size": 65536 00:10:30.427 } 00:10:30.427 ] 00:10:30.427 }' 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.427 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.997 [2024-09-30 12:27:42.705395] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:30.997 12:27:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.997 "name": "Existed_Raid", 00:10:30.997 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:30.997 "strip_size_kb": 0, 00:10:30.997 "state": "configuring", 00:10:30.997 "raid_level": "raid1", 00:10:30.997 "superblock": false, 00:10:30.997 "num_base_bdevs": 3, 00:10:30.997 "num_base_bdevs_discovered": 1, 00:10:30.997 "num_base_bdevs_operational": 3, 00:10:30.997 "base_bdevs_list": [ 00:10:30.997 { 00:10:30.997 "name": "BaseBdev1", 00:10:30.997 "uuid": "f9e564c1-cf95-4e43-9523-f70a004b1b10", 00:10:30.997 "is_configured": true, 00:10:30.997 "data_offset": 0, 00:10:30.997 "data_size": 65536 00:10:30.997 }, 00:10:30.997 { 00:10:30.997 "name": null, 00:10:30.997 "uuid": "1fe55737-5b6e-45aa-9a6f-1d03fa456849", 00:10:30.997 "is_configured": false, 00:10:30.997 "data_offset": 0, 00:10:30.997 "data_size": 65536 00:10:30.997 }, 00:10:30.997 { 00:10:30.997 "name": null, 00:10:30.997 "uuid": "1913f736-12fc-4215-92d2-74a17fed184e", 00:10:30.997 "is_configured": false, 00:10:30.997 "data_offset": 0, 00:10:30.997 "data_size": 65536 00:10:30.997 } 00:10:30.997 ] 00:10:30.997 }' 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.997 12:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.257 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:31.257 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.257 12:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.257 12:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.257 12:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.516 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:31.516 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:31.516 12:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.516 12:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.516 [2024-09-30 12:27:43.168630] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.516 12:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.516 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:31.516 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.516 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.516 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.516 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.516 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.516 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.516 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.516 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.516 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.516 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.517 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.517 12:27:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.517 12:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.517 12:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.517 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.517 "name": "Existed_Raid", 00:10:31.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.517 "strip_size_kb": 0, 00:10:31.517 "state": "configuring", 00:10:31.517 "raid_level": "raid1", 00:10:31.517 "superblock": false, 00:10:31.517 "num_base_bdevs": 3, 00:10:31.517 "num_base_bdevs_discovered": 2, 00:10:31.517 "num_base_bdevs_operational": 3, 00:10:31.517 "base_bdevs_list": [ 00:10:31.517 { 00:10:31.517 "name": "BaseBdev1", 00:10:31.517 "uuid": "f9e564c1-cf95-4e43-9523-f70a004b1b10", 00:10:31.517 "is_configured": true, 00:10:31.517 "data_offset": 0, 00:10:31.517 "data_size": 65536 00:10:31.517 }, 00:10:31.517 { 00:10:31.517 "name": null, 00:10:31.517 "uuid": "1fe55737-5b6e-45aa-9a6f-1d03fa456849", 00:10:31.517 "is_configured": false, 00:10:31.517 "data_offset": 0, 00:10:31.517 "data_size": 65536 00:10:31.517 }, 00:10:31.517 { 00:10:31.517 "name": "BaseBdev3", 00:10:31.517 "uuid": "1913f736-12fc-4215-92d2-74a17fed184e", 00:10:31.517 "is_configured": true, 00:10:31.517 "data_offset": 0, 00:10:31.517 "data_size": 65536 00:10:31.517 } 00:10:31.517 ] 00:10:31.517 }' 00:10:31.517 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.517 12:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.776 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:31.776 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.776 12:27:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.776 12:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.776 12:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.776 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:31.776 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:31.776 12:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.776 12:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.776 [2024-09-30 12:27:43.623927] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.036 12:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.036 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:32.036 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.036 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.036 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.036 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.036 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.036 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.036 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.036 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.036 12:27:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.036 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.036 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.036 12:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.036 12:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.036 12:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.036 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.036 "name": "Existed_Raid", 00:10:32.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.036 "strip_size_kb": 0, 00:10:32.036 "state": "configuring", 00:10:32.036 "raid_level": "raid1", 00:10:32.036 "superblock": false, 00:10:32.036 "num_base_bdevs": 3, 00:10:32.036 "num_base_bdevs_discovered": 1, 00:10:32.036 "num_base_bdevs_operational": 3, 00:10:32.036 "base_bdevs_list": [ 00:10:32.036 { 00:10:32.036 "name": null, 00:10:32.036 "uuid": "f9e564c1-cf95-4e43-9523-f70a004b1b10", 00:10:32.036 "is_configured": false, 00:10:32.036 "data_offset": 0, 00:10:32.036 "data_size": 65536 00:10:32.036 }, 00:10:32.036 { 00:10:32.036 "name": null, 00:10:32.036 "uuid": "1fe55737-5b6e-45aa-9a6f-1d03fa456849", 00:10:32.036 "is_configured": false, 00:10:32.036 "data_offset": 0, 00:10:32.036 "data_size": 65536 00:10:32.036 }, 00:10:32.036 { 00:10:32.036 "name": "BaseBdev3", 00:10:32.036 "uuid": "1913f736-12fc-4215-92d2-74a17fed184e", 00:10:32.036 "is_configured": true, 00:10:32.036 "data_offset": 0, 00:10:32.036 "data_size": 65536 00:10:32.036 } 00:10:32.036 ] 00:10:32.036 }' 00:10:32.036 12:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.036 12:27:43 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:32.296 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.296 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.296 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.296 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:32.296 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.556 [2024-09-30 12:27:44.217244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.556 "name": "Existed_Raid", 00:10:32.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.556 "strip_size_kb": 0, 00:10:32.556 "state": "configuring", 00:10:32.556 "raid_level": "raid1", 00:10:32.556 "superblock": false, 00:10:32.556 "num_base_bdevs": 3, 00:10:32.556 "num_base_bdevs_discovered": 2, 00:10:32.556 "num_base_bdevs_operational": 3, 00:10:32.556 "base_bdevs_list": [ 00:10:32.556 { 00:10:32.556 "name": null, 00:10:32.556 "uuid": "f9e564c1-cf95-4e43-9523-f70a004b1b10", 00:10:32.556 "is_configured": false, 00:10:32.556 "data_offset": 0, 00:10:32.556 "data_size": 65536 00:10:32.556 }, 00:10:32.556 { 00:10:32.556 "name": "BaseBdev2", 00:10:32.556 "uuid": "1fe55737-5b6e-45aa-9a6f-1d03fa456849", 00:10:32.556 "is_configured": true, 00:10:32.556 "data_offset": 0, 00:10:32.556 "data_size": 65536 00:10:32.556 }, 00:10:32.556 { 
00:10:32.556 "name": "BaseBdev3", 00:10:32.556 "uuid": "1913f736-12fc-4215-92d2-74a17fed184e", 00:10:32.556 "is_configured": true, 00:10:32.556 "data_offset": 0, 00:10:32.556 "data_size": 65536 00:10:32.556 } 00:10:32.556 ] 00:10:32.556 }' 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.556 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.816 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.816 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.816 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.816 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:32.816 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.816 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:32.816 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.816 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.816 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.816 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f9e564c1-cf95-4e43-9523-f70a004b1b10 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.076 12:27:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.076 [2024-09-30 12:27:44.776988] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:33.076 [2024-09-30 12:27:44.777041] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:33.076 [2024-09-30 12:27:44.777051] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:33.076 [2024-09-30 12:27:44.777309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:33.076 [2024-09-30 12:27:44.777468] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:33.076 [2024-09-30 12:27:44.777484] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:33.076 [2024-09-30 12:27:44.777729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.076 NewBaseBdev 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.076 [ 00:10:33.076 { 00:10:33.076 "name": "NewBaseBdev", 00:10:33.076 "aliases": [ 00:10:33.076 "f9e564c1-cf95-4e43-9523-f70a004b1b10" 00:10:33.076 ], 00:10:33.076 "product_name": "Malloc disk", 00:10:33.076 "block_size": 512, 00:10:33.076 "num_blocks": 65536, 00:10:33.076 "uuid": "f9e564c1-cf95-4e43-9523-f70a004b1b10", 00:10:33.076 "assigned_rate_limits": { 00:10:33.076 "rw_ios_per_sec": 0, 00:10:33.076 "rw_mbytes_per_sec": 0, 00:10:33.076 "r_mbytes_per_sec": 0, 00:10:33.076 "w_mbytes_per_sec": 0 00:10:33.076 }, 00:10:33.076 "claimed": true, 00:10:33.076 "claim_type": "exclusive_write", 00:10:33.076 "zoned": false, 00:10:33.076 "supported_io_types": { 00:10:33.076 "read": true, 00:10:33.076 "write": true, 00:10:33.076 "unmap": true, 00:10:33.076 "flush": true, 00:10:33.076 "reset": true, 00:10:33.076 "nvme_admin": false, 00:10:33.076 "nvme_io": false, 00:10:33.076 "nvme_io_md": false, 00:10:33.076 "write_zeroes": true, 00:10:33.076 "zcopy": true, 00:10:33.076 "get_zone_info": false, 00:10:33.076 "zone_management": false, 00:10:33.076 "zone_append": false, 00:10:33.076 "compare": false, 00:10:33.076 "compare_and_write": false, 00:10:33.076 "abort": true, 00:10:33.076 "seek_hole": false, 00:10:33.076 "seek_data": false, 00:10:33.076 "copy": true, 00:10:33.076 "nvme_iov_md": false 00:10:33.076 }, 00:10:33.076 "memory_domains": [ 00:10:33.076 { 00:10:33.076 
"dma_device_id": "system", 00:10:33.076 "dma_device_type": 1 00:10:33.076 }, 00:10:33.076 { 00:10:33.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.076 "dma_device_type": 2 00:10:33.076 } 00:10:33.076 ], 00:10:33.076 "driver_specific": {} 00:10:33.076 } 00:10:33.076 ] 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.076 "name": "Existed_Raid", 00:10:33.076 "uuid": "3f5d3e7c-1e67-4f7a-8d8f-841218ec5159", 00:10:33.076 "strip_size_kb": 0, 00:10:33.076 "state": "online", 00:10:33.076 "raid_level": "raid1", 00:10:33.076 "superblock": false, 00:10:33.076 "num_base_bdevs": 3, 00:10:33.076 "num_base_bdevs_discovered": 3, 00:10:33.076 "num_base_bdevs_operational": 3, 00:10:33.076 "base_bdevs_list": [ 00:10:33.076 { 00:10:33.076 "name": "NewBaseBdev", 00:10:33.076 "uuid": "f9e564c1-cf95-4e43-9523-f70a004b1b10", 00:10:33.076 "is_configured": true, 00:10:33.076 "data_offset": 0, 00:10:33.076 "data_size": 65536 00:10:33.076 }, 00:10:33.076 { 00:10:33.076 "name": "BaseBdev2", 00:10:33.076 "uuid": "1fe55737-5b6e-45aa-9a6f-1d03fa456849", 00:10:33.076 "is_configured": true, 00:10:33.076 "data_offset": 0, 00:10:33.076 "data_size": 65536 00:10:33.076 }, 00:10:33.076 { 00:10:33.076 "name": "BaseBdev3", 00:10:33.076 "uuid": "1913f736-12fc-4215-92d2-74a17fed184e", 00:10:33.076 "is_configured": true, 00:10:33.076 "data_offset": 0, 00:10:33.076 "data_size": 65536 00:10:33.076 } 00:10:33.076 ] 00:10:33.076 }' 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.076 12:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.655 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:33.655 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:33.655 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:33.655 12:27:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:33.655 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:33.655 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:33.655 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:33.655 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.655 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.655 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:33.655 [2024-09-30 12:27:45.280476] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.655 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.655 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:33.655 "name": "Existed_Raid", 00:10:33.655 "aliases": [ 00:10:33.655 "3f5d3e7c-1e67-4f7a-8d8f-841218ec5159" 00:10:33.655 ], 00:10:33.655 "product_name": "Raid Volume", 00:10:33.655 "block_size": 512, 00:10:33.655 "num_blocks": 65536, 00:10:33.655 "uuid": "3f5d3e7c-1e67-4f7a-8d8f-841218ec5159", 00:10:33.655 "assigned_rate_limits": { 00:10:33.655 "rw_ios_per_sec": 0, 00:10:33.655 "rw_mbytes_per_sec": 0, 00:10:33.655 "r_mbytes_per_sec": 0, 00:10:33.655 "w_mbytes_per_sec": 0 00:10:33.655 }, 00:10:33.655 "claimed": false, 00:10:33.655 "zoned": false, 00:10:33.655 "supported_io_types": { 00:10:33.655 "read": true, 00:10:33.655 "write": true, 00:10:33.655 "unmap": false, 00:10:33.655 "flush": false, 00:10:33.655 "reset": true, 00:10:33.655 "nvme_admin": false, 00:10:33.655 "nvme_io": false, 00:10:33.655 "nvme_io_md": false, 00:10:33.655 "write_zeroes": true, 00:10:33.655 "zcopy": false, 00:10:33.655 
"get_zone_info": false, 00:10:33.655 "zone_management": false, 00:10:33.655 "zone_append": false, 00:10:33.655 "compare": false, 00:10:33.655 "compare_and_write": false, 00:10:33.655 "abort": false, 00:10:33.655 "seek_hole": false, 00:10:33.655 "seek_data": false, 00:10:33.655 "copy": false, 00:10:33.655 "nvme_iov_md": false 00:10:33.655 }, 00:10:33.655 "memory_domains": [ 00:10:33.655 { 00:10:33.655 "dma_device_id": "system", 00:10:33.655 "dma_device_type": 1 00:10:33.655 }, 00:10:33.655 { 00:10:33.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.655 "dma_device_type": 2 00:10:33.655 }, 00:10:33.655 { 00:10:33.655 "dma_device_id": "system", 00:10:33.655 "dma_device_type": 1 00:10:33.655 }, 00:10:33.655 { 00:10:33.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.655 "dma_device_type": 2 00:10:33.655 }, 00:10:33.655 { 00:10:33.655 "dma_device_id": "system", 00:10:33.655 "dma_device_type": 1 00:10:33.655 }, 00:10:33.655 { 00:10:33.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.655 "dma_device_type": 2 00:10:33.655 } 00:10:33.655 ], 00:10:33.655 "driver_specific": { 00:10:33.655 "raid": { 00:10:33.655 "uuid": "3f5d3e7c-1e67-4f7a-8d8f-841218ec5159", 00:10:33.655 "strip_size_kb": 0, 00:10:33.655 "state": "online", 00:10:33.655 "raid_level": "raid1", 00:10:33.655 "superblock": false, 00:10:33.655 "num_base_bdevs": 3, 00:10:33.655 "num_base_bdevs_discovered": 3, 00:10:33.655 "num_base_bdevs_operational": 3, 00:10:33.655 "base_bdevs_list": [ 00:10:33.655 { 00:10:33.655 "name": "NewBaseBdev", 00:10:33.655 "uuid": "f9e564c1-cf95-4e43-9523-f70a004b1b10", 00:10:33.655 "is_configured": true, 00:10:33.655 "data_offset": 0, 00:10:33.655 "data_size": 65536 00:10:33.655 }, 00:10:33.655 { 00:10:33.655 "name": "BaseBdev2", 00:10:33.655 "uuid": "1fe55737-5b6e-45aa-9a6f-1d03fa456849", 00:10:33.655 "is_configured": true, 00:10:33.655 "data_offset": 0, 00:10:33.655 "data_size": 65536 00:10:33.655 }, 00:10:33.655 { 00:10:33.655 "name": "BaseBdev3", 00:10:33.655 "uuid": 
"1913f736-12fc-4215-92d2-74a17fed184e", 00:10:33.655 "is_configured": true, 00:10:33.655 "data_offset": 0, 00:10:33.655 "data_size": 65536 00:10:33.655 } 00:10:33.655 ] 00:10:33.655 } 00:10:33.655 } 00:10:33.655 }' 00:10:33.655 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:33.655 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:33.655 BaseBdev2 00:10:33.655 BaseBdev3' 00:10:33.655 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.655 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:33.655 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.656 
[2024-09-30 12:27:45.539798] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:33.656 [2024-09-30 12:27:45.539831] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.656 [2024-09-30 12:27:45.539897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.656 [2024-09-30 12:27:45.540173] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.656 [2024-09-30 12:27:45.540183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67283 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 67283 ']' 00:10:33.656 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 67283 00:10:33.941 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:33.941 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:33.941 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67283 00:10:33.941 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:33.941 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:33.941 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67283' 00:10:33.941 killing process with pid 67283 00:10:33.941 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 67283 00:10:33.941 [2024-09-30 
12:27:45.579718] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.941 12:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 67283 00:10:34.221 [2024-09-30 12:27:45.876499] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:35.598 12:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:35.598 00:10:35.598 real 0m10.516s 00:10:35.598 user 0m16.625s 00:10:35.598 sys 0m1.793s 00:10:35.598 12:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.598 12:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.599 ************************************ 00:10:35.599 END TEST raid_state_function_test 00:10:35.599 ************************************ 00:10:35.599 12:27:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:35.599 12:27:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:35.599 12:27:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.599 12:27:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:35.599 ************************************ 00:10:35.599 START TEST raid_state_function_test_sb 00:10:35.599 ************************************ 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:35.599 12:27:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:35.599 
12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:35.599 Process raid pid: 67904 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67904 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67904' 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67904 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 67904 ']' 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.599 12:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.599 [2024-09-30 12:27:47.277227] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:35.599 [2024-09-30 12:27:47.277408] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.599 [2024-09-30 12:27:47.437170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.859 [2024-09-30 12:27:47.632431] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.119 [2024-09-30 12:27:47.830695] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.119 [2024-09-30 12:27:47.830826] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.379 [2024-09-30 12:27:48.102994] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.379 [2024-09-30 12:27:48.103103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.379 [2024-09-30 12:27:48.103143] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.379 [2024-09-30 12:27:48.103172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.379 [2024-09-30 12:27:48.103225] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:36.379 [2024-09-30 12:27:48.103286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.379 "name": "Existed_Raid", 00:10:36.379 "uuid": "378e0201-a0b0-4d41-b1aa-29ce4f673749", 00:10:36.379 "strip_size_kb": 0, 00:10:36.379 "state": "configuring", 00:10:36.379 "raid_level": "raid1", 00:10:36.379 "superblock": true, 00:10:36.379 "num_base_bdevs": 3, 00:10:36.379 "num_base_bdevs_discovered": 0, 00:10:36.379 "num_base_bdevs_operational": 3, 00:10:36.379 "base_bdevs_list": [ 00:10:36.379 { 00:10:36.379 "name": "BaseBdev1", 00:10:36.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.379 "is_configured": false, 00:10:36.379 "data_offset": 0, 00:10:36.379 "data_size": 0 00:10:36.379 }, 00:10:36.379 { 00:10:36.379 "name": "BaseBdev2", 00:10:36.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.379 "is_configured": false, 00:10:36.379 "data_offset": 0, 00:10:36.379 "data_size": 0 00:10:36.379 }, 00:10:36.379 { 00:10:36.379 "name": "BaseBdev3", 00:10:36.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.379 "is_configured": false, 00:10:36.379 "data_offset": 0, 00:10:36.379 "data_size": 0 00:10:36.379 } 00:10:36.379 ] 00:10:36.379 }' 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.379 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.639 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:36.639 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.639 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.639 [2024-09-30 12:27:48.530180] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:36.639 [2024-09-30 12:27:48.530221] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:36.899 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.899 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:36.899 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.899 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.899 [2024-09-30 12:27:48.542189] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.899 [2024-09-30 12:27:48.542237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.899 [2024-09-30 12:27:48.542247] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.899 [2024-09-30 12:27:48.542259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.899 [2024-09-30 12:27:48.542267] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:36.899 [2024-09-30 12:27:48.542278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.899 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.899 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:36.899 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.899 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.899 [2024-09-30 12:27:48.601290] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.899 BaseBdev1 
00:10:36.899 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.899 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:36.899 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:36.899 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:36.899 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:36.899 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.900 [ 00:10:36.900 { 00:10:36.900 "name": "BaseBdev1", 00:10:36.900 "aliases": [ 00:10:36.900 "791de196-c5c3-452b-858d-cb0683daec7d" 00:10:36.900 ], 00:10:36.900 "product_name": "Malloc disk", 00:10:36.900 "block_size": 512, 00:10:36.900 "num_blocks": 65536, 00:10:36.900 "uuid": "791de196-c5c3-452b-858d-cb0683daec7d", 00:10:36.900 "assigned_rate_limits": { 00:10:36.900 
"rw_ios_per_sec": 0, 00:10:36.900 "rw_mbytes_per_sec": 0, 00:10:36.900 "r_mbytes_per_sec": 0, 00:10:36.900 "w_mbytes_per_sec": 0 00:10:36.900 }, 00:10:36.900 "claimed": true, 00:10:36.900 "claim_type": "exclusive_write", 00:10:36.900 "zoned": false, 00:10:36.900 "supported_io_types": { 00:10:36.900 "read": true, 00:10:36.900 "write": true, 00:10:36.900 "unmap": true, 00:10:36.900 "flush": true, 00:10:36.900 "reset": true, 00:10:36.900 "nvme_admin": false, 00:10:36.900 "nvme_io": false, 00:10:36.900 "nvme_io_md": false, 00:10:36.900 "write_zeroes": true, 00:10:36.900 "zcopy": true, 00:10:36.900 "get_zone_info": false, 00:10:36.900 "zone_management": false, 00:10:36.900 "zone_append": false, 00:10:36.900 "compare": false, 00:10:36.900 "compare_and_write": false, 00:10:36.900 "abort": true, 00:10:36.900 "seek_hole": false, 00:10:36.900 "seek_data": false, 00:10:36.900 "copy": true, 00:10:36.900 "nvme_iov_md": false 00:10:36.900 }, 00:10:36.900 "memory_domains": [ 00:10:36.900 { 00:10:36.900 "dma_device_id": "system", 00:10:36.900 "dma_device_type": 1 00:10:36.900 }, 00:10:36.900 { 00:10:36.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.900 "dma_device_type": 2 00:10:36.900 } 00:10:36.900 ], 00:10:36.900 "driver_specific": {} 00:10:36.900 } 00:10:36.900 ] 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.900 "name": "Existed_Raid", 00:10:36.900 "uuid": "279fb433-f96f-4e3b-9fbd-39621837a481", 00:10:36.900 "strip_size_kb": 0, 00:10:36.900 "state": "configuring", 00:10:36.900 "raid_level": "raid1", 00:10:36.900 "superblock": true, 00:10:36.900 "num_base_bdevs": 3, 00:10:36.900 "num_base_bdevs_discovered": 1, 00:10:36.900 "num_base_bdevs_operational": 3, 00:10:36.900 "base_bdevs_list": [ 00:10:36.900 { 00:10:36.900 "name": "BaseBdev1", 00:10:36.900 "uuid": "791de196-c5c3-452b-858d-cb0683daec7d", 00:10:36.900 "is_configured": true, 00:10:36.900 "data_offset": 2048, 00:10:36.900 "data_size": 63488 
00:10:36.900 }, 00:10:36.900 { 00:10:36.900 "name": "BaseBdev2", 00:10:36.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.900 "is_configured": false, 00:10:36.900 "data_offset": 0, 00:10:36.900 "data_size": 0 00:10:36.900 }, 00:10:36.900 { 00:10:36.900 "name": "BaseBdev3", 00:10:36.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.900 "is_configured": false, 00:10:36.900 "data_offset": 0, 00:10:36.900 "data_size": 0 00:10:36.900 } 00:10:36.900 ] 00:10:36.900 }' 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.900 12:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.161 [2024-09-30 12:27:49.028612] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:37.161 [2024-09-30 12:27:49.028656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.161 [2024-09-30 12:27:49.040646] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.161 [2024-09-30 12:27:49.042452] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.161 [2024-09-30 12:27:49.042545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:37.161 [2024-09-30 12:27:49.042561] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:37.161 [2024-09-30 12:27:49.042572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.161 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.420 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.420 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.420 "name": "Existed_Raid", 00:10:37.420 "uuid": "37d7938c-a1be-4107-8bfd-1114d41ccfee", 00:10:37.420 "strip_size_kb": 0, 00:10:37.420 "state": "configuring", 00:10:37.420 "raid_level": "raid1", 00:10:37.420 "superblock": true, 00:10:37.420 "num_base_bdevs": 3, 00:10:37.420 "num_base_bdevs_discovered": 1, 00:10:37.420 "num_base_bdevs_operational": 3, 00:10:37.420 "base_bdevs_list": [ 00:10:37.420 { 00:10:37.420 "name": "BaseBdev1", 00:10:37.420 "uuid": "791de196-c5c3-452b-858d-cb0683daec7d", 00:10:37.420 "is_configured": true, 00:10:37.420 "data_offset": 2048, 00:10:37.420 "data_size": 63488 00:10:37.420 }, 00:10:37.420 { 00:10:37.420 "name": "BaseBdev2", 00:10:37.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.420 "is_configured": false, 00:10:37.420 "data_offset": 0, 00:10:37.420 "data_size": 0 00:10:37.420 }, 00:10:37.420 { 00:10:37.420 "name": "BaseBdev3", 00:10:37.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.420 "is_configured": false, 00:10:37.420 "data_offset": 0, 00:10:37.420 "data_size": 0 00:10:37.420 } 00:10:37.420 ] 00:10:37.420 }' 00:10:37.420 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.420 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:37.679 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:37.679 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.679 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.679 [2024-09-30 12:27:49.470660] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:37.679 BaseBdev2 00:10:37.679 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.679 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:37.679 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.680 [ 00:10:37.680 { 00:10:37.680 "name": "BaseBdev2", 00:10:37.680 "aliases": [ 00:10:37.680 "e596d164-cc68-453a-b952-f180527cb8cf" 00:10:37.680 ], 00:10:37.680 "product_name": "Malloc disk", 00:10:37.680 "block_size": 512, 00:10:37.680 "num_blocks": 65536, 00:10:37.680 "uuid": "e596d164-cc68-453a-b952-f180527cb8cf", 00:10:37.680 "assigned_rate_limits": { 00:10:37.680 "rw_ios_per_sec": 0, 00:10:37.680 "rw_mbytes_per_sec": 0, 00:10:37.680 "r_mbytes_per_sec": 0, 00:10:37.680 "w_mbytes_per_sec": 0 00:10:37.680 }, 00:10:37.680 "claimed": true, 00:10:37.680 "claim_type": "exclusive_write", 00:10:37.680 "zoned": false, 00:10:37.680 "supported_io_types": { 00:10:37.680 "read": true, 00:10:37.680 "write": true, 00:10:37.680 "unmap": true, 00:10:37.680 "flush": true, 00:10:37.680 "reset": true, 00:10:37.680 "nvme_admin": false, 00:10:37.680 "nvme_io": false, 00:10:37.680 "nvme_io_md": false, 00:10:37.680 "write_zeroes": true, 00:10:37.680 "zcopy": true, 00:10:37.680 "get_zone_info": false, 00:10:37.680 "zone_management": false, 00:10:37.680 "zone_append": false, 00:10:37.680 "compare": false, 00:10:37.680 "compare_and_write": false, 00:10:37.680 "abort": true, 00:10:37.680 "seek_hole": false, 00:10:37.680 "seek_data": false, 00:10:37.680 "copy": true, 00:10:37.680 "nvme_iov_md": false 00:10:37.680 }, 00:10:37.680 "memory_domains": [ 00:10:37.680 { 00:10:37.680 "dma_device_id": "system", 00:10:37.680 "dma_device_type": 1 00:10:37.680 }, 00:10:37.680 { 00:10:37.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.680 "dma_device_type": 2 00:10:37.680 } 00:10:37.680 ], 00:10:37.680 "driver_specific": {} 00:10:37.680 } 00:10:37.680 ] 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.680 
12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.680 "name": "Existed_Raid", 00:10:37.680 "uuid": "37d7938c-a1be-4107-8bfd-1114d41ccfee", 00:10:37.680 "strip_size_kb": 0, 00:10:37.680 "state": "configuring", 00:10:37.680 "raid_level": "raid1", 00:10:37.680 "superblock": true, 00:10:37.680 "num_base_bdevs": 3, 00:10:37.680 "num_base_bdevs_discovered": 2, 00:10:37.680 "num_base_bdevs_operational": 3, 00:10:37.680 "base_bdevs_list": [ 00:10:37.680 { 00:10:37.680 "name": "BaseBdev1", 00:10:37.680 "uuid": "791de196-c5c3-452b-858d-cb0683daec7d", 00:10:37.680 "is_configured": true, 00:10:37.680 "data_offset": 2048, 00:10:37.680 "data_size": 63488 00:10:37.680 }, 00:10:37.680 { 00:10:37.680 "name": "BaseBdev2", 00:10:37.680 "uuid": "e596d164-cc68-453a-b952-f180527cb8cf", 00:10:37.680 "is_configured": true, 00:10:37.680 "data_offset": 2048, 00:10:37.680 "data_size": 63488 00:10:37.680 }, 00:10:37.680 { 00:10:37.680 "name": "BaseBdev3", 00:10:37.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.680 "is_configured": false, 00:10:37.680 "data_offset": 0, 00:10:37.680 "data_size": 0 00:10:37.680 } 00:10:37.680 ] 00:10:37.680 }' 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.680 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.250 [2024-09-30 12:27:49.967720] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:38.250 [2024-09-30 12:27:49.968086] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:38.250 [2024-09-30 12:27:49.968157] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:38.250 [2024-09-30 12:27:49.968451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:38.250 BaseBdev3 00:10:38.250 [2024-09-30 12:27:49.968652] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:38.250 [2024-09-30 12:27:49.968663] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:38.250 [2024-09-30 12:27:49.968832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.250 12:27:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.250 [ 00:10:38.250 { 00:10:38.250 "name": "BaseBdev3", 00:10:38.250 "aliases": [ 00:10:38.250 "9464f219-b86a-4e7e-8c85-8885d056f372" 00:10:38.250 ], 00:10:38.250 "product_name": "Malloc disk", 00:10:38.250 "block_size": 512, 00:10:38.250 "num_blocks": 65536, 00:10:38.250 "uuid": "9464f219-b86a-4e7e-8c85-8885d056f372", 00:10:38.250 "assigned_rate_limits": { 00:10:38.250 "rw_ios_per_sec": 0, 00:10:38.250 "rw_mbytes_per_sec": 0, 00:10:38.250 "r_mbytes_per_sec": 0, 00:10:38.250 "w_mbytes_per_sec": 0 00:10:38.250 }, 00:10:38.250 "claimed": true, 00:10:38.250 "claim_type": "exclusive_write", 00:10:38.250 "zoned": false, 00:10:38.250 "supported_io_types": { 00:10:38.250 "read": true, 00:10:38.250 "write": true, 00:10:38.250 "unmap": true, 00:10:38.250 "flush": true, 00:10:38.250 "reset": true, 00:10:38.250 "nvme_admin": false, 00:10:38.250 "nvme_io": false, 00:10:38.250 "nvme_io_md": false, 00:10:38.250 "write_zeroes": true, 00:10:38.250 "zcopy": true, 00:10:38.250 "get_zone_info": false, 00:10:38.250 "zone_management": false, 00:10:38.250 "zone_append": false, 00:10:38.250 "compare": false, 00:10:38.250 "compare_and_write": false, 00:10:38.250 "abort": true, 00:10:38.250 "seek_hole": false, 00:10:38.250 "seek_data": false, 00:10:38.250 "copy": true, 00:10:38.250 "nvme_iov_md": false 00:10:38.250 }, 00:10:38.250 "memory_domains": [ 00:10:38.250 { 00:10:38.250 "dma_device_id": "system", 00:10:38.250 "dma_device_type": 1 00:10:38.250 }, 00:10:38.250 { 00:10:38.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.250 "dma_device_type": 2 00:10:38.250 } 00:10:38.250 ], 00:10:38.250 "driver_specific": {} 00:10:38.250 } 00:10:38.250 ] 
00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.250 12:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.250 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.250 
12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.250 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.250 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.250 "name": "Existed_Raid", 00:10:38.250 "uuid": "37d7938c-a1be-4107-8bfd-1114d41ccfee", 00:10:38.250 "strip_size_kb": 0, 00:10:38.250 "state": "online", 00:10:38.250 "raid_level": "raid1", 00:10:38.250 "superblock": true, 00:10:38.250 "num_base_bdevs": 3, 00:10:38.250 "num_base_bdevs_discovered": 3, 00:10:38.250 "num_base_bdevs_operational": 3, 00:10:38.250 "base_bdevs_list": [ 00:10:38.250 { 00:10:38.250 "name": "BaseBdev1", 00:10:38.250 "uuid": "791de196-c5c3-452b-858d-cb0683daec7d", 00:10:38.250 "is_configured": true, 00:10:38.250 "data_offset": 2048, 00:10:38.250 "data_size": 63488 00:10:38.250 }, 00:10:38.250 { 00:10:38.250 "name": "BaseBdev2", 00:10:38.250 "uuid": "e596d164-cc68-453a-b952-f180527cb8cf", 00:10:38.250 "is_configured": true, 00:10:38.250 "data_offset": 2048, 00:10:38.250 "data_size": 63488 00:10:38.250 }, 00:10:38.250 { 00:10:38.250 "name": "BaseBdev3", 00:10:38.250 "uuid": "9464f219-b86a-4e7e-8c85-8885d056f372", 00:10:38.250 "is_configured": true, 00:10:38.250 "data_offset": 2048, 00:10:38.250 "data_size": 63488 00:10:38.250 } 00:10:38.250 ] 00:10:38.250 }' 00:10:38.250 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.250 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.820 [2024-09-30 12:27:50.431303] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:38.820 "name": "Existed_Raid", 00:10:38.820 "aliases": [ 00:10:38.820 "37d7938c-a1be-4107-8bfd-1114d41ccfee" 00:10:38.820 ], 00:10:38.820 "product_name": "Raid Volume", 00:10:38.820 "block_size": 512, 00:10:38.820 "num_blocks": 63488, 00:10:38.820 "uuid": "37d7938c-a1be-4107-8bfd-1114d41ccfee", 00:10:38.820 "assigned_rate_limits": { 00:10:38.820 "rw_ios_per_sec": 0, 00:10:38.820 "rw_mbytes_per_sec": 0, 00:10:38.820 "r_mbytes_per_sec": 0, 00:10:38.820 "w_mbytes_per_sec": 0 00:10:38.820 }, 00:10:38.820 "claimed": false, 00:10:38.820 "zoned": false, 00:10:38.820 "supported_io_types": { 00:10:38.820 "read": true, 00:10:38.820 "write": true, 00:10:38.820 "unmap": false, 00:10:38.820 "flush": false, 00:10:38.820 "reset": true, 00:10:38.820 "nvme_admin": false, 00:10:38.820 "nvme_io": false, 00:10:38.820 "nvme_io_md": false, 00:10:38.820 "write_zeroes": true, 
00:10:38.820 "zcopy": false, 00:10:38.820 "get_zone_info": false, 00:10:38.820 "zone_management": false, 00:10:38.820 "zone_append": false, 00:10:38.820 "compare": false, 00:10:38.820 "compare_and_write": false, 00:10:38.820 "abort": false, 00:10:38.820 "seek_hole": false, 00:10:38.820 "seek_data": false, 00:10:38.820 "copy": false, 00:10:38.820 "nvme_iov_md": false 00:10:38.820 }, 00:10:38.820 "memory_domains": [ 00:10:38.820 { 00:10:38.820 "dma_device_id": "system", 00:10:38.820 "dma_device_type": 1 00:10:38.820 }, 00:10:38.820 { 00:10:38.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.820 "dma_device_type": 2 00:10:38.820 }, 00:10:38.820 { 00:10:38.820 "dma_device_id": "system", 00:10:38.820 "dma_device_type": 1 00:10:38.820 }, 00:10:38.820 { 00:10:38.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.820 "dma_device_type": 2 00:10:38.820 }, 00:10:38.820 { 00:10:38.820 "dma_device_id": "system", 00:10:38.820 "dma_device_type": 1 00:10:38.820 }, 00:10:38.820 { 00:10:38.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.820 "dma_device_type": 2 00:10:38.820 } 00:10:38.820 ], 00:10:38.820 "driver_specific": { 00:10:38.820 "raid": { 00:10:38.820 "uuid": "37d7938c-a1be-4107-8bfd-1114d41ccfee", 00:10:38.820 "strip_size_kb": 0, 00:10:38.820 "state": "online", 00:10:38.820 "raid_level": "raid1", 00:10:38.820 "superblock": true, 00:10:38.820 "num_base_bdevs": 3, 00:10:38.820 "num_base_bdevs_discovered": 3, 00:10:38.820 "num_base_bdevs_operational": 3, 00:10:38.820 "base_bdevs_list": [ 00:10:38.820 { 00:10:38.820 "name": "BaseBdev1", 00:10:38.820 "uuid": "791de196-c5c3-452b-858d-cb0683daec7d", 00:10:38.820 "is_configured": true, 00:10:38.820 "data_offset": 2048, 00:10:38.820 "data_size": 63488 00:10:38.820 }, 00:10:38.820 { 00:10:38.820 "name": "BaseBdev2", 00:10:38.820 "uuid": "e596d164-cc68-453a-b952-f180527cb8cf", 00:10:38.820 "is_configured": true, 00:10:38.820 "data_offset": 2048, 00:10:38.820 "data_size": 63488 00:10:38.820 }, 00:10:38.820 { 
00:10:38.820 "name": "BaseBdev3", 00:10:38.820 "uuid": "9464f219-b86a-4e7e-8c85-8885d056f372", 00:10:38.820 "is_configured": true, 00:10:38.820 "data_offset": 2048, 00:10:38.820 "data_size": 63488 00:10:38.820 } 00:10:38.820 ] 00:10:38.820 } 00:10:38.820 } 00:10:38.820 }' 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:38.820 BaseBdev2 00:10:38.820 BaseBdev3' 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.820 12:27:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.820 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.820 [2024-09-30 12:27:50.706519] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:39.080 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.080 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:39.080 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:39.080 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:39.080 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:39.081 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:39.081 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:39.081 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.081 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.081 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.081 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.081 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:39.081 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.081 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.081 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.081 
12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.081 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.081 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.081 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.081 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.081 12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.081 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.081 "name": "Existed_Raid", 00:10:39.081 "uuid": "37d7938c-a1be-4107-8bfd-1114d41ccfee", 00:10:39.081 "strip_size_kb": 0, 00:10:39.081 "state": "online", 00:10:39.081 "raid_level": "raid1", 00:10:39.081 "superblock": true, 00:10:39.081 "num_base_bdevs": 3, 00:10:39.081 "num_base_bdevs_discovered": 2, 00:10:39.081 "num_base_bdevs_operational": 2, 00:10:39.081 "base_bdevs_list": [ 00:10:39.081 { 00:10:39.081 "name": null, 00:10:39.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.081 "is_configured": false, 00:10:39.081 "data_offset": 0, 00:10:39.081 "data_size": 63488 00:10:39.081 }, 00:10:39.081 { 00:10:39.081 "name": "BaseBdev2", 00:10:39.081 "uuid": "e596d164-cc68-453a-b952-f180527cb8cf", 00:10:39.081 "is_configured": true, 00:10:39.081 "data_offset": 2048, 00:10:39.081 "data_size": 63488 00:10:39.081 }, 00:10:39.081 { 00:10:39.081 "name": "BaseBdev3", 00:10:39.081 "uuid": "9464f219-b86a-4e7e-8c85-8885d056f372", 00:10:39.081 "is_configured": true, 00:10:39.081 "data_offset": 2048, 00:10:39.081 "data_size": 63488 00:10:39.081 } 00:10:39.081 ] 00:10:39.081 }' 00:10:39.081 12:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.081 
12:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.340 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:39.340 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:39.340 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.340 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:39.340 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.340 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.340 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.340 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:39.340 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:39.340 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:39.341 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.341 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.341 [2024-09-30 12:27:51.228430] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.601 [2024-09-30 12:27:51.378489] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:39.601 [2024-09-30 12:27:51.378666] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.601 [2024-09-30 12:27:51.473831] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.601 [2024-09-30 12:27:51.473965] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.601 [2024-09-30 12:27:51.474007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.601 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.861 BaseBdev2 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.861 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.861 [ 00:10:39.861 { 00:10:39.861 "name": "BaseBdev2", 00:10:39.861 "aliases": [ 00:10:39.861 "b94dd4bd-6281-47c9-843f-f7e7a2812a19" 00:10:39.861 ], 00:10:39.861 "product_name": "Malloc disk", 00:10:39.861 "block_size": 512, 00:10:39.861 "num_blocks": 65536, 00:10:39.861 "uuid": "b94dd4bd-6281-47c9-843f-f7e7a2812a19", 00:10:39.862 "assigned_rate_limits": { 00:10:39.862 "rw_ios_per_sec": 0, 00:10:39.862 "rw_mbytes_per_sec": 0, 00:10:39.862 "r_mbytes_per_sec": 0, 00:10:39.862 "w_mbytes_per_sec": 0 00:10:39.862 }, 00:10:39.862 "claimed": false, 00:10:39.862 "zoned": false, 00:10:39.862 "supported_io_types": { 00:10:39.862 "read": true, 00:10:39.862 "write": true, 00:10:39.862 "unmap": true, 00:10:39.862 "flush": true, 00:10:39.862 "reset": true, 00:10:39.862 "nvme_admin": false, 00:10:39.862 "nvme_io": false, 00:10:39.862 
"nvme_io_md": false, 00:10:39.862 "write_zeroes": true, 00:10:39.862 "zcopy": true, 00:10:39.862 "get_zone_info": false, 00:10:39.862 "zone_management": false, 00:10:39.862 "zone_append": false, 00:10:39.862 "compare": false, 00:10:39.862 "compare_and_write": false, 00:10:39.862 "abort": true, 00:10:39.862 "seek_hole": false, 00:10:39.862 "seek_data": false, 00:10:39.862 "copy": true, 00:10:39.862 "nvme_iov_md": false 00:10:39.862 }, 00:10:39.862 "memory_domains": [ 00:10:39.862 { 00:10:39.862 "dma_device_id": "system", 00:10:39.862 "dma_device_type": 1 00:10:39.862 }, 00:10:39.862 { 00:10:39.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.862 "dma_device_type": 2 00:10:39.862 } 00:10:39.862 ], 00:10:39.862 "driver_specific": {} 00:10:39.862 } 00:10:39.862 ] 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.862 BaseBdev3 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.862 [ 00:10:39.862 { 00:10:39.862 "name": "BaseBdev3", 00:10:39.862 "aliases": [ 00:10:39.862 "275213b0-8911-4e36-8e4e-85b3aa3a7f98" 00:10:39.862 ], 00:10:39.862 "product_name": "Malloc disk", 00:10:39.862 "block_size": 512, 00:10:39.862 "num_blocks": 65536, 00:10:39.862 "uuid": "275213b0-8911-4e36-8e4e-85b3aa3a7f98", 00:10:39.862 "assigned_rate_limits": { 00:10:39.862 "rw_ios_per_sec": 0, 00:10:39.862 "rw_mbytes_per_sec": 0, 00:10:39.862 "r_mbytes_per_sec": 0, 00:10:39.862 "w_mbytes_per_sec": 0 00:10:39.862 }, 00:10:39.862 "claimed": false, 00:10:39.862 "zoned": false, 00:10:39.862 "supported_io_types": { 00:10:39.862 "read": true, 00:10:39.862 "write": true, 00:10:39.862 "unmap": true, 00:10:39.862 "flush": true, 00:10:39.862 "reset": true, 00:10:39.862 "nvme_admin": false, 
00:10:39.862 "nvme_io": false, 00:10:39.862 "nvme_io_md": false, 00:10:39.862 "write_zeroes": true, 00:10:39.862 "zcopy": true, 00:10:39.862 "get_zone_info": false, 00:10:39.862 "zone_management": false, 00:10:39.862 "zone_append": false, 00:10:39.862 "compare": false, 00:10:39.862 "compare_and_write": false, 00:10:39.862 "abort": true, 00:10:39.862 "seek_hole": false, 00:10:39.862 "seek_data": false, 00:10:39.862 "copy": true, 00:10:39.862 "nvme_iov_md": false 00:10:39.862 }, 00:10:39.862 "memory_domains": [ 00:10:39.862 { 00:10:39.862 "dma_device_id": "system", 00:10:39.862 "dma_device_type": 1 00:10:39.862 }, 00:10:39.862 { 00:10:39.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.862 "dma_device_type": 2 00:10:39.862 } 00:10:39.862 ], 00:10:39.862 "driver_specific": {} 00:10:39.862 } 00:10:39.862 ] 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.862 [2024-09-30 12:27:51.688912] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:39.862 [2024-09-30 12:27:51.689012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:39.862 [2024-09-30 12:27:51.689054] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.862 [2024-09-30 12:27:51.690938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.862 
12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.862 "name": "Existed_Raid", 00:10:39.862 "uuid": "a914afd6-936f-4cfd-aa3f-ccd2f7911954", 00:10:39.862 "strip_size_kb": 0, 00:10:39.862 "state": "configuring", 00:10:39.862 "raid_level": "raid1", 00:10:39.862 "superblock": true, 00:10:39.862 "num_base_bdevs": 3, 00:10:39.862 "num_base_bdevs_discovered": 2, 00:10:39.862 "num_base_bdevs_operational": 3, 00:10:39.862 "base_bdevs_list": [ 00:10:39.862 { 00:10:39.862 "name": "BaseBdev1", 00:10:39.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.862 "is_configured": false, 00:10:39.862 "data_offset": 0, 00:10:39.862 "data_size": 0 00:10:39.862 }, 00:10:39.862 { 00:10:39.862 "name": "BaseBdev2", 00:10:39.862 "uuid": "b94dd4bd-6281-47c9-843f-f7e7a2812a19", 00:10:39.862 "is_configured": true, 00:10:39.862 "data_offset": 2048, 00:10:39.862 "data_size": 63488 00:10:39.862 }, 00:10:39.862 { 00:10:39.862 "name": "BaseBdev3", 00:10:39.862 "uuid": "275213b0-8911-4e36-8e4e-85b3aa3a7f98", 00:10:39.862 "is_configured": true, 00:10:39.862 "data_offset": 2048, 00:10:39.862 "data_size": 63488 00:10:39.862 } 00:10:39.862 ] 00:10:39.862 }' 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.862 12:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.432 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:40.432 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.432 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.432 [2024-09-30 12:27:52.144180] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:40.432 12:27:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.432 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:40.432 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.432 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.432 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.432 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.432 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.432 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.432 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.432 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.433 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.433 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.433 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.433 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.433 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.433 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.433 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.433 "name": 
"Existed_Raid", 00:10:40.433 "uuid": "a914afd6-936f-4cfd-aa3f-ccd2f7911954", 00:10:40.433 "strip_size_kb": 0, 00:10:40.433 "state": "configuring", 00:10:40.433 "raid_level": "raid1", 00:10:40.433 "superblock": true, 00:10:40.433 "num_base_bdevs": 3, 00:10:40.433 "num_base_bdevs_discovered": 1, 00:10:40.433 "num_base_bdevs_operational": 3, 00:10:40.433 "base_bdevs_list": [ 00:10:40.433 { 00:10:40.433 "name": "BaseBdev1", 00:10:40.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.433 "is_configured": false, 00:10:40.433 "data_offset": 0, 00:10:40.433 "data_size": 0 00:10:40.433 }, 00:10:40.433 { 00:10:40.433 "name": null, 00:10:40.433 "uuid": "b94dd4bd-6281-47c9-843f-f7e7a2812a19", 00:10:40.433 "is_configured": false, 00:10:40.433 "data_offset": 0, 00:10:40.433 "data_size": 63488 00:10:40.433 }, 00:10:40.433 { 00:10:40.433 "name": "BaseBdev3", 00:10:40.433 "uuid": "275213b0-8911-4e36-8e4e-85b3aa3a7f98", 00:10:40.433 "is_configured": true, 00:10:40.433 "data_offset": 2048, 00:10:40.433 "data_size": 63488 00:10:40.433 } 00:10:40.433 ] 00:10:40.433 }' 00:10:40.433 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.433 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.695 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.696 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:40.696 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.696 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.696 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:40.956 
12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.956 [2024-09-30 12:27:52.647992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.956 BaseBdev1 00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:40.956 [
00:10:40.956 {
00:10:40.956 "name": "BaseBdev1",
00:10:40.956 "aliases": [
00:10:40.956 "67aea97c-9b4f-43c2-bcda-4d75d6056700"
00:10:40.956 ],
00:10:40.956 "product_name": "Malloc disk",
00:10:40.956 "block_size": 512,
00:10:40.956 "num_blocks": 65536,
00:10:40.956 "uuid": "67aea97c-9b4f-43c2-bcda-4d75d6056700",
00:10:40.956 "assigned_rate_limits": {
00:10:40.956 "rw_ios_per_sec": 0,
00:10:40.956 "rw_mbytes_per_sec": 0,
00:10:40.956 "r_mbytes_per_sec": 0,
00:10:40.956 "w_mbytes_per_sec": 0
00:10:40.956 },
00:10:40.956 "claimed": true,
00:10:40.956 "claim_type": "exclusive_write",
00:10:40.956 "zoned": false,
00:10:40.956 "supported_io_types": {
00:10:40.956 "read": true,
00:10:40.956 "write": true,
00:10:40.956 "unmap": true,
00:10:40.956 "flush": true,
00:10:40.956 "reset": true,
00:10:40.956 "nvme_admin": false,
00:10:40.956 "nvme_io": false,
00:10:40.956 "nvme_io_md": false,
00:10:40.956 "write_zeroes": true,
00:10:40.956 "zcopy": true,
00:10:40.956 "get_zone_info": false,
00:10:40.956 "zone_management": false,
00:10:40.956 "zone_append": false,
00:10:40.956 "compare": false,
00:10:40.956 "compare_and_write": false,
00:10:40.956 "abort": true,
00:10:40.956 "seek_hole": false,
00:10:40.956 "seek_data": false,
00:10:40.956 "copy": true,
00:10:40.956 "nvme_iov_md": false
00:10:40.956 },
00:10:40.956 "memory_domains": [
00:10:40.956 {
00:10:40.956 "dma_device_id": "system",
00:10:40.956 "dma_device_type": 1
00:10:40.956 },
00:10:40.956 {
00:10:40.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:40.956 "dma_device_type": 2
00:10:40.956 }
00:10:40.956 ],
00:10:40.956 "driver_specific": {}
00:10:40.956 }
00:10:40.956 ]
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:40.956 "name": "Existed_Raid",
00:10:40.956 "uuid": "a914afd6-936f-4cfd-aa3f-ccd2f7911954",
00:10:40.956 "strip_size_kb": 0,
00:10:40.956 "state": "configuring",
00:10:40.956 "raid_level": "raid1",
00:10:40.956 "superblock": true,
00:10:40.956 "num_base_bdevs": 3,
00:10:40.956 "num_base_bdevs_discovered": 2,
00:10:40.956 "num_base_bdevs_operational": 3,
00:10:40.956 "base_bdevs_list": [
00:10:40.956 {
00:10:40.956 "name": "BaseBdev1",
00:10:40.956 "uuid": "67aea97c-9b4f-43c2-bcda-4d75d6056700",
00:10:40.956 "is_configured": true,
00:10:40.956 "data_offset": 2048,
00:10:40.956 "data_size": 63488
00:10:40.956 },
00:10:40.956 {
00:10:40.956 "name": null,
00:10:40.956 "uuid": "b94dd4bd-6281-47c9-843f-f7e7a2812a19",
00:10:40.956 "is_configured": false,
00:10:40.956 "data_offset": 0,
00:10:40.956 "data_size": 63488
00:10:40.956 },
00:10:40.956 {
00:10:40.956 "name": "BaseBdev3",
00:10:40.956 "uuid": "275213b0-8911-4e36-8e4e-85b3aa3a7f98",
00:10:40.956 "is_configured": true,
00:10:40.956 "data_offset": 2048,
00:10:40.956 "data_size": 63488
00:10:40.956 }
00:10:40.956 ]
00:10:40.956 }'
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:40.956 12:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:41.217 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:41.217 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:41.217 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:41.217 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:41.476 [2024-09-30 12:27:53.151392] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:41.476 "name": "Existed_Raid",
00:10:41.476 "uuid": "a914afd6-936f-4cfd-aa3f-ccd2f7911954",
00:10:41.476 "strip_size_kb": 0,
00:10:41.476 "state": "configuring",
00:10:41.476 "raid_level": "raid1",
00:10:41.476 "superblock": true,
00:10:41.476 "num_base_bdevs": 3,
00:10:41.476 "num_base_bdevs_discovered": 1,
00:10:41.476 "num_base_bdevs_operational": 3,
00:10:41.476 "base_bdevs_list": [
00:10:41.476 {
00:10:41.476 "name": "BaseBdev1",
00:10:41.476 "uuid": "67aea97c-9b4f-43c2-bcda-4d75d6056700",
00:10:41.476 "is_configured": true,
00:10:41.476 "data_offset": 2048,
00:10:41.476 "data_size": 63488
00:10:41.476 },
00:10:41.476 {
00:10:41.476 "name": null,
00:10:41.476 "uuid": "b94dd4bd-6281-47c9-843f-f7e7a2812a19",
00:10:41.476 "is_configured": false,
00:10:41.476 "data_offset": 0,
00:10:41.476 "data_size": 63488
00:10:41.476 },
00:10:41.476 {
00:10:41.476 "name": null,
00:10:41.476 "uuid": "275213b0-8911-4e36-8e4e-85b3aa3a7f98",
00:10:41.476 "is_configured": false,
00:10:41.476 "data_offset": 0,
00:10:41.476 "data_size": 63488
00:10:41.476 }
00:10:41.476 ]
00:10:41.476 }'
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:41.476 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:41.736 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:41.736 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:41.736 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:41.736 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:41.736 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:41.736 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:41.736 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:41.736 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:41.736 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:41.995 [2024-09-30 12:27:53.634685] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:41.995 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:41.995 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:41.995 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:41.995 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:41.995 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:41.995 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:41.995 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:41.995 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:41.995 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:41.995 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:41.995 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:41.995 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:41.995 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:41.995 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:41.995 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:41.995 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:41.995 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:41.995 "name": "Existed_Raid",
00:10:41.995 "uuid": "a914afd6-936f-4cfd-aa3f-ccd2f7911954",
00:10:41.995 "strip_size_kb": 0,
00:10:41.995 "state": "configuring",
00:10:41.995 "raid_level": "raid1",
00:10:41.995 "superblock": true,
00:10:41.995 "num_base_bdevs": 3,
00:10:41.995 "num_base_bdevs_discovered": 2,
00:10:41.995 "num_base_bdevs_operational": 3,
00:10:41.995 "base_bdevs_list": [
00:10:41.995 {
00:10:41.995 "name": "BaseBdev1",
00:10:41.995 "uuid": "67aea97c-9b4f-43c2-bcda-4d75d6056700",
00:10:41.995 "is_configured": true,
00:10:41.995 "data_offset": 2048,
00:10:41.995 "data_size": 63488
00:10:41.995 },
00:10:41.995 {
00:10:41.995 "name": null,
00:10:41.995 "uuid": "b94dd4bd-6281-47c9-843f-f7e7a2812a19",
00:10:41.995 "is_configured": false,
00:10:41.995 "data_offset": 0,
00:10:41.995 "data_size": 63488
00:10:41.995 },
00:10:41.995 {
00:10:41.995 "name": "BaseBdev3",
00:10:41.995 "uuid": "275213b0-8911-4e36-8e4e-85b3aa3a7f98",
00:10:41.995 "is_configured": true,
00:10:41.995 "data_offset": 2048,
00:10:41.995 "data_size": 63488
00:10:41.995 }
00:10:41.995 ]
00:10:41.995 }'
00:10:41.995 12:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:41.995 12:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:42.253 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:42.253 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:42.253 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.253 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:42.253 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.253 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:10:42.253 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:42.253 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.253 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:42.253 [2024-09-30 12:27:54.121900] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:42.513 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.513 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:42.513 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:42.513 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:42.513 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:42.513 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:42.513 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:42.513 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:42.513 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:42.513 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:42.513 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:42.513 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:42.513 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:42.513 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.513 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:42.513 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.513 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:42.513 "name": "Existed_Raid",
00:10:42.513 "uuid": "a914afd6-936f-4cfd-aa3f-ccd2f7911954",
00:10:42.513 "strip_size_kb": 0,
00:10:42.513 "state": "configuring",
00:10:42.513 "raid_level": "raid1",
00:10:42.513 "superblock": true,
00:10:42.513 "num_base_bdevs": 3,
00:10:42.513 "num_base_bdevs_discovered": 1,
00:10:42.513 "num_base_bdevs_operational": 3,
00:10:42.513 "base_bdevs_list": [
00:10:42.513 {
00:10:42.513 "name": null,
00:10:42.513 "uuid": "67aea97c-9b4f-43c2-bcda-4d75d6056700",
00:10:42.513 "is_configured": false,
00:10:42.513 "data_offset": 0,
00:10:42.513 "data_size": 63488
00:10:42.513 },
00:10:42.513 {
00:10:42.513 "name": null,
00:10:42.513 "uuid": "b94dd4bd-6281-47c9-843f-f7e7a2812a19",
00:10:42.513 "is_configured": false,
00:10:42.513 "data_offset": 0,
00:10:42.513 "data_size": 63488
00:10:42.513 },
00:10:42.513 {
00:10:42.513 "name": "BaseBdev3",
00:10:42.513 "uuid": "275213b0-8911-4e36-8e4e-85b3aa3a7f98",
00:10:42.513 "is_configured": true,
00:10:42.513 "data_offset": 2048,
00:10:42.513 "data_size": 63488
00:10:42.513 }
00:10:42.513 ]
00:10:42.513 }'
00:10:42.513 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:42.513 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:42.773 [2024-09-30 12:27:54.613618] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:42.773 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:43.033 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:43.033 "name": "Existed_Raid",
00:10:43.033 "uuid": "a914afd6-936f-4cfd-aa3f-ccd2f7911954",
00:10:43.033 "strip_size_kb": 0,
00:10:43.033 "state": "configuring",
00:10:43.033 "raid_level": "raid1",
00:10:43.033 "superblock": true,
00:10:43.033 "num_base_bdevs": 3,
00:10:43.033 "num_base_bdevs_discovered": 2,
00:10:43.033 "num_base_bdevs_operational": 3,
00:10:43.033 "base_bdevs_list": [
00:10:43.033 {
00:10:43.033 "name": null,
00:10:43.033 "uuid": "67aea97c-9b4f-43c2-bcda-4d75d6056700",
00:10:43.033 "is_configured": false,
00:10:43.033 "data_offset": 0,
00:10:43.033 "data_size": 63488
00:10:43.033 },
00:10:43.033 {
00:10:43.033 "name": "BaseBdev2",
00:10:43.033 "uuid": "b94dd4bd-6281-47c9-843f-f7e7a2812a19",
00:10:43.033 "is_configured": true,
00:10:43.033 "data_offset": 2048,
00:10:43.033 "data_size": 63488
00:10:43.033 },
00:10:43.033 {
00:10:43.033 "name": "BaseBdev3",
00:10:43.033 "uuid": "275213b0-8911-4e36-8e4e-85b3aa3a7f98",
00:10:43.033 "is_configured": true,
00:10:43.033 "data_offset": 2048,
00:10:43.033 "data_size": 63488
00:10:43.033 }
00:10:43.033 ]
00:10:43.033 }'
00:10:43.033 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:43.033 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:43.292 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:43.292 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:43.292 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:43.292 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:43.292 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:43.292 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:10:43.292 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:43.292 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:10:43.292 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:43.292 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:43.292 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:43.292 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 67aea97c-9b4f-43c2-bcda-4d75d6056700
00:10:43.292 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:43.292 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:43.293 [2024-09-30 12:27:55.153484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:10:43.293 [2024-09-30 12:27:55.153826] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:10:43.293 [2024-09-30 12:27:55.153888] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:43.293 [2024-09-30 12:27:55.154177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:10:43.293 [2024-09-30 12:27:55.154386] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:10:43.293 [2024-09-30 12:27:55.154447] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
NewBaseBdev
00:10:43.293 [2024-09-30 12:27:55.154631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:43.293 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:43.293 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:10:43.293 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:43.293 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:43.293 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:43.293 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:43.293 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:43.293 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:43.293 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:43.293 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:43.293 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:10:43.293 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:43.293 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:43.293 [
00:10:43.293 {
00:10:43.293 "name": "NewBaseBdev",
00:10:43.293 "aliases": [
00:10:43.293 "67aea97c-9b4f-43c2-bcda-4d75d6056700"
00:10:43.293 ],
00:10:43.293 "product_name": "Malloc disk",
00:10:43.293 "block_size": 512,
00:10:43.293 "num_blocks": 65536,
00:10:43.293 "uuid": "67aea97c-9b4f-43c2-bcda-4d75d6056700",
00:10:43.293 "assigned_rate_limits": {
00:10:43.293 "rw_ios_per_sec": 0,
00:10:43.293 "rw_mbytes_per_sec": 0,
00:10:43.293 "r_mbytes_per_sec": 0,
00:10:43.293 "w_mbytes_per_sec": 0
00:10:43.293 },
00:10:43.293 "claimed": true,
00:10:43.293 "claim_type": "exclusive_write",
00:10:43.293 "zoned": false,
00:10:43.293 "supported_io_types": {
00:10:43.293 "read": true,
00:10:43.293 "write": true,
00:10:43.293 "unmap": true,
00:10:43.293 "flush": true,
00:10:43.293 "reset": true,
00:10:43.293 "nvme_admin": false,
00:10:43.293 "nvme_io": false,
00:10:43.293 "nvme_io_md": false,
00:10:43.293 "write_zeroes": true,
00:10:43.293 "zcopy": true,
00:10:43.293 "get_zone_info": false,
00:10:43.293 "zone_management": false,
00:10:43.293 "zone_append": false,
00:10:43.293 "compare": false,
00:10:43.293 "compare_and_write": false,
00:10:43.293 "abort": true,
00:10:43.293 "seek_hole": false,
00:10:43.293 "seek_data": false,
00:10:43.293 "copy": true,
00:10:43.293 "nvme_iov_md": false
00:10:43.293 },
00:10:43.293 "memory_domains": [
00:10:43.293 {
00:10:43.552 "dma_device_id": "system",
00:10:43.552 "dma_device_type": 1
00:10:43.552 },
00:10:43.552 {
00:10:43.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:43.552 "dma_device_type": 2
00:10:43.552 }
00:10:43.552 ],
00:10:43.552 "driver_specific": {}
00:10:43.552 }
00:10:43.552 ]
00:10:43.552 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:43.552 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:43.552 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:10:43.552 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:43.553 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:43.553 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:43.553 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:43.553 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:43.553 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:43.553 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:43.553 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:43.553 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:43.553 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:43.553 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:43.553 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:43.553 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:43.553 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:43.553 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:43.553 "name": "Existed_Raid",
00:10:43.553 "uuid": "a914afd6-936f-4cfd-aa3f-ccd2f7911954",
00:10:43.553 "strip_size_kb": 0,
00:10:43.553 "state": "online",
00:10:43.553 "raid_level": "raid1",
00:10:43.553 "superblock": true,
00:10:43.553 "num_base_bdevs": 3,
00:10:43.553 "num_base_bdevs_discovered": 3,
00:10:43.553 "num_base_bdevs_operational": 3,
00:10:43.553 "base_bdevs_list": [
00:10:43.553 {
00:10:43.553 "name": "NewBaseBdev",
00:10:43.553 "uuid": "67aea97c-9b4f-43c2-bcda-4d75d6056700",
00:10:43.553 "is_configured": true,
00:10:43.553 "data_offset": 2048,
00:10:43.553 "data_size": 63488
00:10:43.553 },
00:10:43.553 {
00:10:43.553 "name": "BaseBdev2",
00:10:43.553 "uuid": "b94dd4bd-6281-47c9-843f-f7e7a2812a19",
00:10:43.553 "is_configured": true,
00:10:43.553 "data_offset": 2048,
00:10:43.553 "data_size": 63488
00:10:43.553 },
00:10:43.553 {
00:10:43.553 "name": "BaseBdev3",
00:10:43.553 "uuid": "275213b0-8911-4e36-8e4e-85b3aa3a7f98",
00:10:43.553 "is_configured": true,
00:10:43.553 "data_offset": 2048,
00:10:43.553 "data_size": 63488
00:10:43.553 }
00:10:43.553 ]
00:10:43.553 }'
00:10:43.553 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:43.553 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:43.812 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:10:43.812 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:43.812 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:43.812 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:43.812 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:10:43.812 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:43.812 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:43.812 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:43.812 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:43.812 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:43.812 [2024-09-30 12:27:55.629016] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:43.812 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:43.812 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:43.812 "name": "Existed_Raid",
00:10:43.812 "aliases": [
00:10:43.812 "a914afd6-936f-4cfd-aa3f-ccd2f7911954"
00:10:43.812 ],
00:10:43.812 "product_name": "Raid Volume",
00:10:43.812 "block_size": 512,
00:10:43.812 "num_blocks": 63488,
00:10:43.812 "uuid": "a914afd6-936f-4cfd-aa3f-ccd2f7911954",
00:10:43.812 "assigned_rate_limits": {
00:10:43.812 "rw_ios_per_sec": 0,
00:10:43.812 "rw_mbytes_per_sec": 0,
00:10:43.812 "r_mbytes_per_sec": 0,
00:10:43.812 "w_mbytes_per_sec": 0
00:10:43.812 },
00:10:43.812 "claimed": false,
00:10:43.812 "zoned": false,
00:10:43.812 "supported_io_types": {
00:10:43.812 "read": true,
00:10:43.812 "write": true,
00:10:43.812 "unmap": false,
00:10:43.812 "flush": false,
00:10:43.812 "reset": true,
00:10:43.812 "nvme_admin": false,
00:10:43.812 "nvme_io": false,
00:10:43.812 "nvme_io_md": false,
00:10:43.812 "write_zeroes": true,
00:10:43.812 "zcopy": false,
00:10:43.812 "get_zone_info": false,
00:10:43.812 "zone_management": false,
00:10:43.812 "zone_append": false,
00:10:43.812 "compare": false,
00:10:43.812 "compare_and_write": false,
00:10:43.812 "abort": false,
00:10:43.812 "seek_hole": false,
00:10:43.812 "seek_data": false,
00:10:43.812 "copy": false,
00:10:43.812 "nvme_iov_md": false
00:10:43.812 },
00:10:43.812 "memory_domains": [
00:10:43.812 {
00:10:43.812 "dma_device_id": "system",
00:10:43.812 "dma_device_type": 1
00:10:43.812 },
00:10:43.812 {
00:10:43.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:43.812 "dma_device_type": 2
00:10:43.812 },
00:10:43.812 {
00:10:43.812 "dma_device_id": "system",
00:10:43.812 "dma_device_type": 1
00:10:43.812 },
00:10:43.812 {
00:10:43.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:43.812 "dma_device_type": 2
00:10:43.812 },
00:10:43.812 {
00:10:43.812 "dma_device_id": "system",
00:10:43.812 "dma_device_type": 1
00:10:43.812 },
00:10:43.812 {
00:10:43.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:43.812 "dma_device_type": 2
00:10:43.812 }
00:10:43.812 ],
00:10:43.812 "driver_specific": {
00:10:43.812 "raid": {
00:10:43.812 "uuid": "a914afd6-936f-4cfd-aa3f-ccd2f7911954",
00:10:43.812 "strip_size_kb": 0,
00:10:43.812 "state": "online",
00:10:43.812 "raid_level": "raid1",
00:10:43.812 "superblock": true,
00:10:43.812 "num_base_bdevs": 3,
00:10:43.812 "num_base_bdevs_discovered": 3,
00:10:43.812 "num_base_bdevs_operational": 3,
00:10:43.812 "base_bdevs_list": [
00:10:43.812 {
00:10:43.812 "name": "NewBaseBdev",
00:10:43.812 "uuid": "67aea97c-9b4f-43c2-bcda-4d75d6056700",
00:10:43.812 "is_configured": true,
00:10:43.812 "data_offset": 2048,
00:10:43.813 "data_size": 63488
00:10:43.813 },
00:10:43.813 {
00:10:43.813 "name": "BaseBdev2",
00:10:43.813 "uuid": "b94dd4bd-6281-47c9-843f-f7e7a2812a19",
00:10:43.813 "is_configured": true,
00:10:43.813 "data_offset": 2048,
00:10:43.813 "data_size": 63488
00:10:43.813 },
00:10:43.813 {
00:10:43.813 "name": "BaseBdev3",
00:10:43.813 "uuid": "275213b0-8911-4e36-8e4e-85b3aa3a7f98",
00:10:43.813 "is_configured": true,
00:10:43.813 "data_offset": 2048,
00:10:43.813 "data_size": 63488
00:10:43.813 }
00:10:43.813 ]
00:10:43.813 }
00:10:43.813 }
00:10:43.813 }'
00:10:43.813 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:10:44.072 BaseBdev2
00:10:44.072 BaseBdev3'
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:44.072 [2024-09-30 12:27:55.900244] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
[2024-09-30 12:27:55.900327] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-09-30 12:27:55.900401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-09-30 12:27:55.900687] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-09-30 12:27:55.900699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67904
00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 67904 ']'
00:10:44.072 12:27:55
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 67904 00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67904 00:10:44.072 killing process with pid 67904 00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:44.072 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67904' 00:10:44.073 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 67904 00:10:44.073 [2024-09-30 12:27:55.936516] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:44.073 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 67904 00:10:44.641 [2024-09-30 12:27:56.228756] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:45.580 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:45.580 ************************************ 00:10:45.580 END TEST raid_state_function_test_sb 00:10:45.580 ************************************ 00:10:45.580 00:10:45.580 real 0m10.283s 00:10:45.580 user 0m16.198s 00:10:45.580 sys 0m1.851s 00:10:45.580 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:45.580 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.839 12:27:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:10:45.839 12:27:57 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:45.839 12:27:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:45.839 12:27:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:45.839 ************************************ 00:10:45.839 START TEST raid_superblock_test 00:10:45.839 ************************************ 00:10:45.839 12:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:10:45.839 12:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:45.839 12:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:45.840 12:27:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68519 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68519 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 68519 ']' 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:45.840 12:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.840 [2024-09-30 12:27:57.632094] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:45.840 [2024-09-30 12:27:57.633152] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68519 ] 00:10:46.099 [2024-09-30 12:27:57.809475] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.358 [2024-09-30 12:27:58.017468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.358 [2024-09-30 12:27:58.214134] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.358 [2024-09-30 12:27:58.214288] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:46.618 
12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.618 malloc1 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.618 [2024-09-30 12:27:58.504389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:46.618 [2024-09-30 12:27:58.504508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.618 [2024-09-30 12:27:58.504557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:46.618 [2024-09-30 12:27:58.504612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.618 [2024-09-30 12:27:58.506638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.618 [2024-09-30 12:27:58.506734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:46.618 pt1 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.618 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.878 malloc2 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.878 [2024-09-30 12:27:58.571200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:46.878 [2024-09-30 12:27:58.571304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.878 [2024-09-30 12:27:58.571332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:46.878 [2024-09-30 12:27:58.571344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.878 [2024-09-30 12:27:58.573400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.878 [2024-09-30 12:27:58.573441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:46.878 
pt2 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.878 malloc3 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.878 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.878 [2024-09-30 12:27:58.623335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:46.878 [2024-09-30 12:27:58.623456] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.878 [2024-09-30 12:27:58.623498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:46.879 [2024-09-30 12:27:58.623531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.879 [2024-09-30 12:27:58.625536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.879 [2024-09-30 12:27:58.625632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:46.879 pt3 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.879 [2024-09-30 12:27:58.635405] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:46.879 [2024-09-30 12:27:58.637292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:46.879 [2024-09-30 12:27:58.637413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:46.879 [2024-09-30 12:27:58.637596] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:46.879 [2024-09-30 12:27:58.637651] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:46.879 [2024-09-30 12:27:58.637918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:46.879 
[2024-09-30 12:27:58.638142] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:46.879 [2024-09-30 12:27:58.638190] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:46.879 [2024-09-30 12:27:58.638392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.879 "name": "raid_bdev1", 00:10:46.879 "uuid": "792684c4-c9d6-429d-acef-694c6c928c28", 00:10:46.879 "strip_size_kb": 0, 00:10:46.879 "state": "online", 00:10:46.879 "raid_level": "raid1", 00:10:46.879 "superblock": true, 00:10:46.879 "num_base_bdevs": 3, 00:10:46.879 "num_base_bdevs_discovered": 3, 00:10:46.879 "num_base_bdevs_operational": 3, 00:10:46.879 "base_bdevs_list": [ 00:10:46.879 { 00:10:46.879 "name": "pt1", 00:10:46.879 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:46.879 "is_configured": true, 00:10:46.879 "data_offset": 2048, 00:10:46.879 "data_size": 63488 00:10:46.879 }, 00:10:46.879 { 00:10:46.879 "name": "pt2", 00:10:46.879 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.879 "is_configured": true, 00:10:46.879 "data_offset": 2048, 00:10:46.879 "data_size": 63488 00:10:46.879 }, 00:10:46.879 { 00:10:46.879 "name": "pt3", 00:10:46.879 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.879 "is_configured": true, 00:10:46.879 "data_offset": 2048, 00:10:46.879 "data_size": 63488 00:10:46.879 } 00:10:46.879 ] 00:10:46.879 }' 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.879 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.138 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:47.138 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:47.138 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:47.138 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:47.138 12:27:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:47.138 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:47.138 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:47.138 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:47.138 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.138 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.138 [2024-09-30 12:27:59.031132] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:47.397 "name": "raid_bdev1", 00:10:47.397 "aliases": [ 00:10:47.397 "792684c4-c9d6-429d-acef-694c6c928c28" 00:10:47.397 ], 00:10:47.397 "product_name": "Raid Volume", 00:10:47.397 "block_size": 512, 00:10:47.397 "num_blocks": 63488, 00:10:47.397 "uuid": "792684c4-c9d6-429d-acef-694c6c928c28", 00:10:47.397 "assigned_rate_limits": { 00:10:47.397 "rw_ios_per_sec": 0, 00:10:47.397 "rw_mbytes_per_sec": 0, 00:10:47.397 "r_mbytes_per_sec": 0, 00:10:47.397 "w_mbytes_per_sec": 0 00:10:47.397 }, 00:10:47.397 "claimed": false, 00:10:47.397 "zoned": false, 00:10:47.397 "supported_io_types": { 00:10:47.397 "read": true, 00:10:47.397 "write": true, 00:10:47.397 "unmap": false, 00:10:47.397 "flush": false, 00:10:47.397 "reset": true, 00:10:47.397 "nvme_admin": false, 00:10:47.397 "nvme_io": false, 00:10:47.397 "nvme_io_md": false, 00:10:47.397 "write_zeroes": true, 00:10:47.397 "zcopy": false, 00:10:47.397 "get_zone_info": false, 00:10:47.397 "zone_management": false, 00:10:47.397 "zone_append": false, 00:10:47.397 "compare": false, 00:10:47.397 
"compare_and_write": false, 00:10:47.397 "abort": false, 00:10:47.397 "seek_hole": false, 00:10:47.397 "seek_data": false, 00:10:47.397 "copy": false, 00:10:47.397 "nvme_iov_md": false 00:10:47.397 }, 00:10:47.397 "memory_domains": [ 00:10:47.397 { 00:10:47.397 "dma_device_id": "system", 00:10:47.397 "dma_device_type": 1 00:10:47.397 }, 00:10:47.397 { 00:10:47.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.397 "dma_device_type": 2 00:10:47.397 }, 00:10:47.397 { 00:10:47.397 "dma_device_id": "system", 00:10:47.397 "dma_device_type": 1 00:10:47.397 }, 00:10:47.397 { 00:10:47.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.397 "dma_device_type": 2 00:10:47.397 }, 00:10:47.397 { 00:10:47.397 "dma_device_id": "system", 00:10:47.397 "dma_device_type": 1 00:10:47.397 }, 00:10:47.397 { 00:10:47.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.397 "dma_device_type": 2 00:10:47.397 } 00:10:47.397 ], 00:10:47.397 "driver_specific": { 00:10:47.397 "raid": { 00:10:47.397 "uuid": "792684c4-c9d6-429d-acef-694c6c928c28", 00:10:47.397 "strip_size_kb": 0, 00:10:47.397 "state": "online", 00:10:47.397 "raid_level": "raid1", 00:10:47.397 "superblock": true, 00:10:47.397 "num_base_bdevs": 3, 00:10:47.397 "num_base_bdevs_discovered": 3, 00:10:47.397 "num_base_bdevs_operational": 3, 00:10:47.397 "base_bdevs_list": [ 00:10:47.397 { 00:10:47.397 "name": "pt1", 00:10:47.397 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:47.397 "is_configured": true, 00:10:47.397 "data_offset": 2048, 00:10:47.397 "data_size": 63488 00:10:47.397 }, 00:10:47.397 { 00:10:47.397 "name": "pt2", 00:10:47.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.397 "is_configured": true, 00:10:47.397 "data_offset": 2048, 00:10:47.397 "data_size": 63488 00:10:47.397 }, 00:10:47.397 { 00:10:47.397 "name": "pt3", 00:10:47.397 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:47.397 "is_configured": true, 00:10:47.397 "data_offset": 2048, 00:10:47.397 "data_size": 63488 00:10:47.397 } 
00:10:47.397 ] 00:10:47.397 } 00:10:47.397 } 00:10:47.397 }' 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:47.397 pt2 00:10:47.397 pt3' 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.397 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:47.657 [2024-09-30 12:27:59.298515] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=792684c4-c9d6-429d-acef-694c6c928c28 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 792684c4-c9d6-429d-acef-694c6c928c28 ']' 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.657 [2024-09-30 12:27:59.346172] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:47.657 [2024-09-30 12:27:59.346200] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:47.657 [2024-09-30 12:27:59.346279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.657 [2024-09-30 12:27:59.346365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:47.657 [2024-09-30 12:27:59.346375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.657 [2024-09-30 12:27:59.497938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:47.657 [2024-09-30 12:27:59.500111] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:47.657 [2024-09-30 12:27:59.500197] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:47.657 [2024-09-30 12:27:59.500281] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:47.657 [2024-09-30 12:27:59.500384] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:47.657 [2024-09-30 12:27:59.500438] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:47.657 [2024-09-30 12:27:59.500494] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:47.657 [2024-09-30 12:27:59.500526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:47.657 request: 00:10:47.657 { 00:10:47.657 "name": "raid_bdev1", 00:10:47.657 "raid_level": "raid1", 00:10:47.657 "base_bdevs": [ 00:10:47.657 "malloc1", 00:10:47.657 "malloc2", 00:10:47.657 "malloc3" 00:10:47.657 ], 00:10:47.657 "superblock": false, 00:10:47.657 "method": "bdev_raid_create", 00:10:47.657 "req_id": 1 00:10:47.657 } 00:10:47.657 Got JSON-RPC error response 00:10:47.657 response: 00:10:47.657 { 00:10:47.657 "code": -17, 00:10:47.657 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:47.657 } 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:47.657 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.917 [2024-09-30 12:27:59.561821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:47.917 [2024-09-30 12:27:59.561909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.917 [2024-09-30 12:27:59.561937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:47.917 [2024-09-30 12:27:59.561946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.917 [2024-09-30 12:27:59.564380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.917 [2024-09-30 12:27:59.564415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:47.917 [2024-09-30 12:27:59.564483] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:47.917 [2024-09-30 12:27:59.564529] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:47.917 pt1 00:10:47.917 
12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.917 "name": "raid_bdev1", 00:10:47.917 "uuid": "792684c4-c9d6-429d-acef-694c6c928c28", 00:10:47.917 "strip_size_kb": 0, 00:10:47.917 
"state": "configuring", 00:10:47.917 "raid_level": "raid1", 00:10:47.917 "superblock": true, 00:10:47.917 "num_base_bdevs": 3, 00:10:47.917 "num_base_bdevs_discovered": 1, 00:10:47.917 "num_base_bdevs_operational": 3, 00:10:47.917 "base_bdevs_list": [ 00:10:47.917 { 00:10:47.917 "name": "pt1", 00:10:47.917 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:47.917 "is_configured": true, 00:10:47.917 "data_offset": 2048, 00:10:47.917 "data_size": 63488 00:10:47.917 }, 00:10:47.917 { 00:10:47.917 "name": null, 00:10:47.917 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.917 "is_configured": false, 00:10:47.917 "data_offset": 2048, 00:10:47.917 "data_size": 63488 00:10:47.917 }, 00:10:47.917 { 00:10:47.917 "name": null, 00:10:47.917 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:47.917 "is_configured": false, 00:10:47.917 "data_offset": 2048, 00:10:47.917 "data_size": 63488 00:10:47.917 } 00:10:47.917 ] 00:10:47.917 }' 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.917 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.177 [2024-09-30 12:27:59.973102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:48.177 [2024-09-30 12:27:59.973215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.177 [2024-09-30 12:27:59.973258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:48.177 
[2024-09-30 12:27:59.973287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.177 [2024-09-30 12:27:59.973747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.177 [2024-09-30 12:27:59.973811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:48.177 [2024-09-30 12:27:59.973912] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:48.177 [2024-09-30 12:27:59.973961] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:48.177 pt2 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.177 [2024-09-30 12:27:59.985092] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.177 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.177 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.177 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.177 "name": "raid_bdev1", 00:10:48.177 "uuid": "792684c4-c9d6-429d-acef-694c6c928c28", 00:10:48.177 "strip_size_kb": 0, 00:10:48.177 "state": "configuring", 00:10:48.177 "raid_level": "raid1", 00:10:48.177 "superblock": true, 00:10:48.177 "num_base_bdevs": 3, 00:10:48.177 "num_base_bdevs_discovered": 1, 00:10:48.177 "num_base_bdevs_operational": 3, 00:10:48.177 "base_bdevs_list": [ 00:10:48.177 { 00:10:48.177 "name": "pt1", 00:10:48.177 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:48.177 "is_configured": true, 00:10:48.177 "data_offset": 2048, 00:10:48.177 "data_size": 63488 00:10:48.177 }, 00:10:48.177 { 00:10:48.177 "name": null, 00:10:48.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.177 "is_configured": false, 00:10:48.177 "data_offset": 0, 00:10:48.177 "data_size": 63488 00:10:48.177 }, 00:10:48.177 { 00:10:48.177 "name": null, 00:10:48.177 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.177 "is_configured": false, 00:10:48.177 
"data_offset": 2048, 00:10:48.177 "data_size": 63488 00:10:48.177 } 00:10:48.177 ] 00:10:48.177 }' 00:10:48.177 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.177 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.746 [2024-09-30 12:28:00.376406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:48.746 [2024-09-30 12:28:00.376479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.746 [2024-09-30 12:28:00.376498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:48.746 [2024-09-30 12:28:00.376510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.746 [2024-09-30 12:28:00.377003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.746 [2024-09-30 12:28:00.377029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:48.746 [2024-09-30 12:28:00.377126] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:48.746 [2024-09-30 12:28:00.377160] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:48.746 pt2 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.746 12:28:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.746 [2024-09-30 12:28:00.388395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:48.746 [2024-09-30 12:28:00.388484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.746 [2024-09-30 12:28:00.388548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:48.746 [2024-09-30 12:28:00.388581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.746 [2024-09-30 12:28:00.388993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.746 [2024-09-30 12:28:00.389051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:48.746 [2024-09-30 12:28:00.389136] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:48.746 [2024-09-30 12:28:00.389186] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:48.746 [2024-09-30 12:28:00.389312] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:48.746 [2024-09-30 12:28:00.389324] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:48.746 [2024-09-30 12:28:00.389578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:48.746 [2024-09-30 12:28:00.389729] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:10:48.746 [2024-09-30 12:28:00.389738] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:48.746 [2024-09-30 12:28:00.389915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.746 pt3 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.746 12:28:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.746 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.746 "name": "raid_bdev1", 00:10:48.746 "uuid": "792684c4-c9d6-429d-acef-694c6c928c28", 00:10:48.746 "strip_size_kb": 0, 00:10:48.746 "state": "online", 00:10:48.746 "raid_level": "raid1", 00:10:48.746 "superblock": true, 00:10:48.746 "num_base_bdevs": 3, 00:10:48.746 "num_base_bdevs_discovered": 3, 00:10:48.746 "num_base_bdevs_operational": 3, 00:10:48.746 "base_bdevs_list": [ 00:10:48.746 { 00:10:48.746 "name": "pt1", 00:10:48.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:48.746 "is_configured": true, 00:10:48.746 "data_offset": 2048, 00:10:48.746 "data_size": 63488 00:10:48.746 }, 00:10:48.747 { 00:10:48.747 "name": "pt2", 00:10:48.747 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.747 "is_configured": true, 00:10:48.747 "data_offset": 2048, 00:10:48.747 "data_size": 63488 00:10:48.747 }, 00:10:48.747 { 00:10:48.747 "name": "pt3", 00:10:48.747 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.747 "is_configured": true, 00:10:48.747 "data_offset": 2048, 00:10:48.747 "data_size": 63488 00:10:48.747 } 00:10:48.747 ] 00:10:48.747 }' 00:10:48.747 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.747 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.006 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:49.006 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:49.006 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:49.006 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:49.006 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:49.006 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:49.006 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:49.006 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:49.006 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.006 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.006 [2024-09-30 12:28:00.835910] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.006 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.006 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:49.006 "name": "raid_bdev1", 00:10:49.006 "aliases": [ 00:10:49.006 "792684c4-c9d6-429d-acef-694c6c928c28" 00:10:49.006 ], 00:10:49.006 "product_name": "Raid Volume", 00:10:49.006 "block_size": 512, 00:10:49.006 "num_blocks": 63488, 00:10:49.006 "uuid": "792684c4-c9d6-429d-acef-694c6c928c28", 00:10:49.006 "assigned_rate_limits": { 00:10:49.006 "rw_ios_per_sec": 0, 00:10:49.006 "rw_mbytes_per_sec": 0, 00:10:49.006 "r_mbytes_per_sec": 0, 00:10:49.006 "w_mbytes_per_sec": 0 00:10:49.006 }, 00:10:49.006 "claimed": false, 00:10:49.006 "zoned": false, 00:10:49.006 "supported_io_types": { 00:10:49.006 "read": true, 00:10:49.006 "write": true, 00:10:49.006 "unmap": false, 00:10:49.006 "flush": false, 00:10:49.006 "reset": true, 00:10:49.006 "nvme_admin": false, 00:10:49.006 "nvme_io": false, 00:10:49.006 "nvme_io_md": false, 00:10:49.006 "write_zeroes": true, 00:10:49.006 "zcopy": false, 00:10:49.006 "get_zone_info": 
false, 00:10:49.006 "zone_management": false, 00:10:49.006 "zone_append": false, 00:10:49.006 "compare": false, 00:10:49.006 "compare_and_write": false, 00:10:49.006 "abort": false, 00:10:49.006 "seek_hole": false, 00:10:49.006 "seek_data": false, 00:10:49.006 "copy": false, 00:10:49.006 "nvme_iov_md": false 00:10:49.006 }, 00:10:49.006 "memory_domains": [ 00:10:49.006 { 00:10:49.006 "dma_device_id": "system", 00:10:49.006 "dma_device_type": 1 00:10:49.006 }, 00:10:49.006 { 00:10:49.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.006 "dma_device_type": 2 00:10:49.006 }, 00:10:49.006 { 00:10:49.006 "dma_device_id": "system", 00:10:49.006 "dma_device_type": 1 00:10:49.006 }, 00:10:49.006 { 00:10:49.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.006 "dma_device_type": 2 00:10:49.006 }, 00:10:49.006 { 00:10:49.006 "dma_device_id": "system", 00:10:49.006 "dma_device_type": 1 00:10:49.006 }, 00:10:49.006 { 00:10:49.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.006 "dma_device_type": 2 00:10:49.006 } 00:10:49.006 ], 00:10:49.006 "driver_specific": { 00:10:49.006 "raid": { 00:10:49.006 "uuid": "792684c4-c9d6-429d-acef-694c6c928c28", 00:10:49.006 "strip_size_kb": 0, 00:10:49.006 "state": "online", 00:10:49.006 "raid_level": "raid1", 00:10:49.006 "superblock": true, 00:10:49.006 "num_base_bdevs": 3, 00:10:49.006 "num_base_bdevs_discovered": 3, 00:10:49.006 "num_base_bdevs_operational": 3, 00:10:49.006 "base_bdevs_list": [ 00:10:49.006 { 00:10:49.006 "name": "pt1", 00:10:49.006 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.006 "is_configured": true, 00:10:49.006 "data_offset": 2048, 00:10:49.006 "data_size": 63488 00:10:49.006 }, 00:10:49.006 { 00:10:49.006 "name": "pt2", 00:10:49.006 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.006 "is_configured": true, 00:10:49.006 "data_offset": 2048, 00:10:49.006 "data_size": 63488 00:10:49.006 }, 00:10:49.006 { 00:10:49.006 "name": "pt3", 00:10:49.006 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:49.006 "is_configured": true, 00:10:49.006 "data_offset": 2048, 00:10:49.006 "data_size": 63488 00:10:49.006 } 00:10:49.006 ] 00:10:49.006 } 00:10:49.006 } 00:10:49.006 }' 00:10:49.006 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:49.265 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:49.265 pt2 00:10:49.265 pt3' 00:10:49.265 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.265 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:49.265 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.265 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:49.266 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.266 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.266 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.266 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.266 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.266 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.266 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.266 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:49.266 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:49.266 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.266 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.266 [2024-09-30 12:28:01.087582] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 792684c4-c9d6-429d-acef-694c6c928c28 '!=' 792684c4-c9d6-429d-acef-694c6c928c28 ']' 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.266 [2024-09-30 12:28:01.123300] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.266 12:28:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.266 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.525 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.525 "name": "raid_bdev1", 00:10:49.525 "uuid": "792684c4-c9d6-429d-acef-694c6c928c28", 00:10:49.525 "strip_size_kb": 0, 00:10:49.525 "state": "online", 00:10:49.525 "raid_level": "raid1", 00:10:49.525 "superblock": true, 00:10:49.525 "num_base_bdevs": 3, 00:10:49.525 "num_base_bdevs_discovered": 2, 00:10:49.525 "num_base_bdevs_operational": 2, 00:10:49.525 "base_bdevs_list": [ 00:10:49.525 { 00:10:49.525 "name": null, 00:10:49.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.525 "is_configured": false, 00:10:49.525 "data_offset": 0, 00:10:49.525 "data_size": 63488 00:10:49.525 }, 00:10:49.525 { 00:10:49.525 "name": "pt2", 00:10:49.525 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.525 "is_configured": true, 00:10:49.525 "data_offset": 2048, 00:10:49.525 "data_size": 63488 00:10:49.525 }, 00:10:49.525 { 00:10:49.525 "name": "pt3", 00:10:49.525 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.525 "is_configured": true, 00:10:49.525 "data_offset": 2048, 00:10:49.525 "data_size": 63488 00:10:49.525 } 
00:10:49.525 ] 00:10:49.525 }' 00:10:49.525 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.525 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.784 [2024-09-30 12:28:01.578483] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:49.784 [2024-09-30 12:28:01.578561] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.784 [2024-09-30 12:28:01.578658] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.784 [2024-09-30 12:28:01.578764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.784 [2024-09-30 12:28:01.578816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.784 12:28:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.784 [2024-09-30 12:28:01.646378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:49.784 [2024-09-30 12:28:01.646429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.784 [2024-09-30 12:28:01.646446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:49.784 [2024-09-30 12:28:01.646458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.784 [2024-09-30 12:28:01.648911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.784 [2024-09-30 12:28:01.648948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:49.784 [2024-09-30 12:28:01.649025] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:49.784 [2024-09-30 12:28:01.649070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:49.784 pt2 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.784 12:28:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.784 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.043 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.043 "name": "raid_bdev1", 00:10:50.043 "uuid": "792684c4-c9d6-429d-acef-694c6c928c28", 00:10:50.043 "strip_size_kb": 0, 00:10:50.043 "state": "configuring", 00:10:50.043 "raid_level": "raid1", 00:10:50.043 "superblock": true, 00:10:50.043 "num_base_bdevs": 3, 00:10:50.043 "num_base_bdevs_discovered": 1, 00:10:50.043 "num_base_bdevs_operational": 2, 00:10:50.043 "base_bdevs_list": [ 00:10:50.043 { 00:10:50.043 "name": null, 00:10:50.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.043 "is_configured": false, 00:10:50.043 "data_offset": 2048, 00:10:50.043 "data_size": 63488 00:10:50.043 }, 00:10:50.043 { 00:10:50.043 "name": "pt2", 00:10:50.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.043 "is_configured": true, 00:10:50.043 "data_offset": 2048, 00:10:50.043 "data_size": 63488 00:10:50.043 }, 00:10:50.043 { 00:10:50.043 "name": null, 00:10:50.043 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.043 "is_configured": false, 00:10:50.043 "data_offset": 2048, 00:10:50.043 "data_size": 63488 00:10:50.043 } 
00:10:50.043 ] 00:10:50.043 }' 00:10:50.043 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.043 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.302 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:50.302 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:50.302 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:50.302 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:50.302 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.302 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.302 [2024-09-30 12:28:02.029697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:50.302 [2024-09-30 12:28:02.029813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.302 [2024-09-30 12:28:02.029849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:50.302 [2024-09-30 12:28:02.029878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.302 [2024-09-30 12:28:02.030333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.302 [2024-09-30 12:28:02.030390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:50.302 [2024-09-30 12:28:02.030488] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:50.302 [2024-09-30 12:28:02.030544] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:50.303 [2024-09-30 12:28:02.030685] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:50.303 [2024-09-30 12:28:02.030724] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:50.303 [2024-09-30 12:28:02.031009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:50.303 [2024-09-30 12:28:02.031213] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:50.303 [2024-09-30 12:28:02.031248] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:50.303 [2024-09-30 12:28:02.031453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.303 pt3 00:10:50.303 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.303 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:50.303 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.303 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.303 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.303 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.303 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:50.303 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.303 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.303 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.303 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.303 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.303 
12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.303 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.303 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.303 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.303 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.303 "name": "raid_bdev1", 00:10:50.303 "uuid": "792684c4-c9d6-429d-acef-694c6c928c28", 00:10:50.303 "strip_size_kb": 0, 00:10:50.303 "state": "online", 00:10:50.303 "raid_level": "raid1", 00:10:50.303 "superblock": true, 00:10:50.303 "num_base_bdevs": 3, 00:10:50.303 "num_base_bdevs_discovered": 2, 00:10:50.303 "num_base_bdevs_operational": 2, 00:10:50.303 "base_bdevs_list": [ 00:10:50.303 { 00:10:50.303 "name": null, 00:10:50.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.303 "is_configured": false, 00:10:50.303 "data_offset": 2048, 00:10:50.303 "data_size": 63488 00:10:50.303 }, 00:10:50.303 { 00:10:50.303 "name": "pt2", 00:10:50.303 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.303 "is_configured": true, 00:10:50.303 "data_offset": 2048, 00:10:50.303 "data_size": 63488 00:10:50.303 }, 00:10:50.303 { 00:10:50.303 "name": "pt3", 00:10:50.303 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.303 "is_configured": true, 00:10:50.303 "data_offset": 2048, 00:10:50.303 "data_size": 63488 00:10:50.303 } 00:10:50.303 ] 00:10:50.303 }' 00:10:50.303 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.303 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.870 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:50.870 12:28:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.870 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.870 [2024-09-30 12:28:02.472940] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:50.870 [2024-09-30 12:28:02.472973] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.870 [2024-09-30 12:28:02.473045] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.870 [2024-09-30 12:28:02.473106] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.870 [2024-09-30 12:28:02.473116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.871 [2024-09-30 12:28:02.540856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:50.871 [2024-09-30 12:28:02.540943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.871 [2024-09-30 12:28:02.540997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:50.871 [2024-09-30 12:28:02.541024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.871 [2024-09-30 12:28:02.543463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.871 [2024-09-30 12:28:02.543531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:50.871 [2024-09-30 12:28:02.543646] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:50.871 [2024-09-30 12:28:02.543720] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:50.871 [2024-09-30 12:28:02.543898] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:50.871 [2024-09-30 12:28:02.543952] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:50.871 [2024-09-30 12:28:02.543992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:10:50.871 [2024-09-30 12:28:02.544099] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:50.871 pt1 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.871 "name": "raid_bdev1", 00:10:50.871 "uuid": "792684c4-c9d6-429d-acef-694c6c928c28", 00:10:50.871 "strip_size_kb": 0, 00:10:50.871 "state": "configuring", 00:10:50.871 "raid_level": "raid1", 00:10:50.871 "superblock": true, 00:10:50.871 "num_base_bdevs": 3, 00:10:50.871 "num_base_bdevs_discovered": 1, 00:10:50.871 "num_base_bdevs_operational": 2, 00:10:50.871 "base_bdevs_list": [ 00:10:50.871 { 00:10:50.871 "name": null, 00:10:50.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.871 "is_configured": false, 00:10:50.871 "data_offset": 2048, 00:10:50.871 "data_size": 63488 00:10:50.871 }, 00:10:50.871 { 00:10:50.871 "name": "pt2", 00:10:50.871 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.871 "is_configured": true, 00:10:50.871 "data_offset": 2048, 00:10:50.871 "data_size": 63488 00:10:50.871 }, 00:10:50.871 { 00:10:50.871 "name": null, 00:10:50.871 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.871 "is_configured": false, 00:10:50.871 "data_offset": 2048, 00:10:50.871 "data_size": 63488 00:10:50.871 } 00:10:50.871 ] 00:10:50.871 }' 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.871 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.130 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:51.130 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:51.130 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.130 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.130 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:51.130 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:51.130 12:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:51.130 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.130 12:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.130 [2024-09-30 12:28:03.004033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:51.130 [2024-09-30 12:28:03.004126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.130 [2024-09-30 12:28:03.004161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:51.130 [2024-09-30 12:28:03.004192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.130 [2024-09-30 12:28:03.004620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.130 [2024-09-30 12:28:03.004672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:51.130 [2024-09-30 12:28:03.004780] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:51.130 [2024-09-30 12:28:03.004853] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:51.130 [2024-09-30 12:28:03.005029] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:51.130 [2024-09-30 12:28:03.005063] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:51.130 [2024-09-30 12:28:03.005365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:51.130 [2024-09-30 12:28:03.005553] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:51.130 [2024-09-30 12:28:03.005598] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:51.130 [2024-09-30 12:28:03.005791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.130 pt3 00:10:51.130 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.130 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:51.130 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.130 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.130 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.130 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.130 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:51.130 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.130 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.130 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.130 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.130 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.130 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.130 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.130 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.389 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:10:51.389 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.389 "name": "raid_bdev1", 00:10:51.389 "uuid": "792684c4-c9d6-429d-acef-694c6c928c28", 00:10:51.389 "strip_size_kb": 0, 00:10:51.389 "state": "online", 00:10:51.389 "raid_level": "raid1", 00:10:51.389 "superblock": true, 00:10:51.389 "num_base_bdevs": 3, 00:10:51.389 "num_base_bdevs_discovered": 2, 00:10:51.389 "num_base_bdevs_operational": 2, 00:10:51.389 "base_bdevs_list": [ 00:10:51.389 { 00:10:51.389 "name": null, 00:10:51.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.389 "is_configured": false, 00:10:51.389 "data_offset": 2048, 00:10:51.389 "data_size": 63488 00:10:51.389 }, 00:10:51.389 { 00:10:51.389 "name": "pt2", 00:10:51.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.389 "is_configured": true, 00:10:51.389 "data_offset": 2048, 00:10:51.389 "data_size": 63488 00:10:51.389 }, 00:10:51.389 { 00:10:51.389 "name": "pt3", 00:10:51.389 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.389 "is_configured": true, 00:10:51.389 "data_offset": 2048, 00:10:51.389 "data_size": 63488 00:10:51.389 } 00:10:51.389 ] 00:10:51.389 }' 00:10:51.389 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.389 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.649 [2024-09-30 12:28:03.439578] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 792684c4-c9d6-429d-acef-694c6c928c28 '!=' 792684c4-c9d6-429d-acef-694c6c928c28 ']' 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68519 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 68519 ']' 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 68519 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68519 00:10:51.649 killing process with pid 68519 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68519' 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 68519 00:10:51.649 [2024-09-30 12:28:03.524119] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:51.649 [2024-09-30 12:28:03.524210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.649 [2024-09-30 12:28:03.524276] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.649 [2024-09-30 12:28:03.524288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:51.649 12:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 68519 00:10:52.218 [2024-09-30 12:28:03.835130] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.642 12:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:53.642 00:10:53.642 real 0m7.628s 00:10:53.642 user 0m11.700s 00:10:53.642 sys 0m1.367s 00:10:53.642 12:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.642 12:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.642 ************************************ 00:10:53.642 END TEST raid_superblock_test 00:10:53.642 ************************************ 00:10:53.642 12:28:05 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:53.642 12:28:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:53.642 12:28:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.642 12:28:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.642 ************************************ 00:10:53.642 START TEST raid_read_error_test 00:10:53.642 ************************************ 00:10:53.642 12:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:10:53.642 12:28:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:53.642 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:53.642 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:53.642 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:53.642 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.642 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:53.642 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.642 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.642 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:53.642 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.642 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.642 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:53.642 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.642 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.642 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:53.642 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:53.642 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:53.642 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:53.642 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:53.643 12:28:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:53.643 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:53.643 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:53.643 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:53.643 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:53.643 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JDJMfrOGNd 00:10:53.643 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68965 00:10:53.643 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68965 00:10:53.643 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:53.643 12:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 68965 ']' 00:10:53.643 12:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.643 12:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.643 12:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.643 12:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.643 12:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.643 [2024-09-30 12:28:05.351710] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:53.643 [2024-09-30 12:28:05.352444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68965 ] 00:10:53.643 [2024-09-30 12:28:05.519366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.902 [2024-09-30 12:28:05.763676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.161 [2024-09-30 12:28:05.992198] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.161 [2024-09-30 12:28:05.992237] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.420 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.421 BaseBdev1_malloc 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.421 true 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.421 [2024-09-30 12:28:06.228911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:54.421 [2024-09-30 12:28:06.228974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.421 [2024-09-30 12:28:06.228991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:54.421 [2024-09-30 12:28:06.229001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.421 [2024-09-30 12:28:06.231257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.421 [2024-09-30 12:28:06.231369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:54.421 BaseBdev1 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.421 BaseBdev2_malloc 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.421 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.680 true 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.681 [2024-09-30 12:28:06.323028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:54.681 [2024-09-30 12:28:06.323085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.681 [2024-09-30 12:28:06.323102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:54.681 [2024-09-30 12:28:06.323114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.681 [2024-09-30 12:28:06.325476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.681 [2024-09-30 12:28:06.325515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:54.681 BaseBdev2 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.681 BaseBdev3_malloc 00:10:54.681 12:28:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.681 true 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.681 [2024-09-30 12:28:06.395051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:54.681 [2024-09-30 12:28:06.395105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.681 [2024-09-30 12:28:06.395120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:54.681 [2024-09-30 12:28:06.395131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.681 [2024-09-30 12:28:06.397592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.681 [2024-09-30 12:28:06.397667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:54.681 BaseBdev3 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.681 [2024-09-30 12:28:06.407126] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.681 [2024-09-30 12:28:06.409180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.681 [2024-09-30 12:28:06.409246] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:54.681 [2024-09-30 12:28:06.409453] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:54.681 [2024-09-30 12:28:06.409465] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:54.681 [2024-09-30 12:28:06.409715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:54.681 [2024-09-30 12:28:06.409885] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:54.681 [2024-09-30 12:28:06.409900] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:54.681 [2024-09-30 12:28:06.410047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.681 12:28:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.681 "name": "raid_bdev1", 00:10:54.681 "uuid": "951f5683-af38-4cb3-973e-0dc9173b650b", 00:10:54.681 "strip_size_kb": 0, 00:10:54.681 "state": "online", 00:10:54.681 "raid_level": "raid1", 00:10:54.681 "superblock": true, 00:10:54.681 "num_base_bdevs": 3, 00:10:54.681 "num_base_bdevs_discovered": 3, 00:10:54.681 "num_base_bdevs_operational": 3, 00:10:54.681 "base_bdevs_list": [ 00:10:54.681 { 00:10:54.681 "name": "BaseBdev1", 00:10:54.681 "uuid": "856ec352-f86a-5c9a-9aaa-26ea07ca2c84", 00:10:54.681 "is_configured": true, 00:10:54.681 "data_offset": 2048, 00:10:54.681 "data_size": 63488 00:10:54.681 }, 00:10:54.681 { 00:10:54.681 "name": "BaseBdev2", 00:10:54.681 "uuid": "6d3c6c85-8e5f-5bca-ad20-044575ed0e14", 00:10:54.681 "is_configured": true, 00:10:54.681 "data_offset": 2048, 00:10:54.681 "data_size": 63488 
00:10:54.681 }, 00:10:54.681 { 00:10:54.681 "name": "BaseBdev3", 00:10:54.681 "uuid": "bccc3daa-94cb-578f-b092-ab1170c838d6", 00:10:54.681 "is_configured": true, 00:10:54.681 "data_offset": 2048, 00:10:54.681 "data_size": 63488 00:10:54.681 } 00:10:54.681 ] 00:10:54.681 }' 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.681 12:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.940 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:54.940 12:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:55.200 [2024-09-30 12:28:06.915708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.138 
12:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.138 "name": "raid_bdev1", 00:10:56.138 "uuid": "951f5683-af38-4cb3-973e-0dc9173b650b", 00:10:56.138 "strip_size_kb": 0, 00:10:56.138 "state": "online", 00:10:56.138 "raid_level": "raid1", 00:10:56.138 "superblock": true, 00:10:56.138 "num_base_bdevs": 3, 00:10:56.138 "num_base_bdevs_discovered": 3, 00:10:56.138 "num_base_bdevs_operational": 3, 00:10:56.138 "base_bdevs_list": [ 00:10:56.138 { 00:10:56.138 "name": "BaseBdev1", 00:10:56.138 "uuid": "856ec352-f86a-5c9a-9aaa-26ea07ca2c84", 
00:10:56.138 "is_configured": true, 00:10:56.138 "data_offset": 2048, 00:10:56.138 "data_size": 63488 00:10:56.138 }, 00:10:56.138 { 00:10:56.138 "name": "BaseBdev2", 00:10:56.138 "uuid": "6d3c6c85-8e5f-5bca-ad20-044575ed0e14", 00:10:56.138 "is_configured": true, 00:10:56.138 "data_offset": 2048, 00:10:56.138 "data_size": 63488 00:10:56.138 }, 00:10:56.138 { 00:10:56.138 "name": "BaseBdev3", 00:10:56.138 "uuid": "bccc3daa-94cb-578f-b092-ab1170c838d6", 00:10:56.138 "is_configured": true, 00:10:56.138 "data_offset": 2048, 00:10:56.138 "data_size": 63488 00:10:56.138 } 00:10:56.138 ] 00:10:56.138 }' 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.138 12:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.399 12:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:56.399 12:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.399 12:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.399 [2024-09-30 12:28:08.262837] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:56.399 [2024-09-30 12:28:08.262962] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.399 [2024-09-30 12:28:08.265510] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.399 [2024-09-30 12:28:08.265600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.399 [2024-09-30 12:28:08.265731] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.399 [2024-09-30 12:28:08.265818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:56.399 { 00:10:56.399 "results": [ 00:10:56.399 { 00:10:56.399 "job": "raid_bdev1", 
00:10:56.399 "core_mask": "0x1", 00:10:56.399 "workload": "randrw", 00:10:56.399 "percentage": 50, 00:10:56.399 "status": "finished", 00:10:56.399 "queue_depth": 1, 00:10:56.399 "io_size": 131072, 00:10:56.399 "runtime": 1.347729, 00:10:56.399 "iops": 10824.134525561147, 00:10:56.399 "mibps": 1353.0168156951434, 00:10:56.399 "io_failed": 0, 00:10:56.399 "io_timeout": 0, 00:10:56.399 "avg_latency_us": 89.97933481248572, 00:10:56.399 "min_latency_us": 22.246288209606988, 00:10:56.399 "max_latency_us": 1480.9991266375546 00:10:56.399 } 00:10:56.399 ], 00:10:56.399 "core_count": 1 00:10:56.399 } 00:10:56.399 12:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.399 12:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68965 00:10:56.399 12:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 68965 ']' 00:10:56.399 12:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 68965 00:10:56.399 12:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:56.399 12:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:56.399 12:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68965 00:10:56.659 12:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:56.659 12:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:56.659 killing process with pid 68965 00:10:56.659 12:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68965' 00:10:56.659 12:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 68965 00:10:56.659 [2024-09-30 12:28:08.303681] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:56.659 12:28:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 68965 00:10:56.659 [2024-09-30 12:28:08.548861] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.568 12:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JDJMfrOGNd 00:10:58.568 12:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:58.568 12:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:58.568 12:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:58.568 12:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:58.568 ************************************ 00:10:58.568 END TEST raid_read_error_test 00:10:58.568 ************************************ 00:10:58.568 12:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:58.568 12:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:58.568 12:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:58.568 00:10:58.568 real 0m4.712s 00:10:58.568 user 0m5.364s 00:10:58.568 sys 0m0.672s 00:10:58.568 12:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.568 12:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.568 12:28:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:58.568 12:28:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:58.568 12:28:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.568 12:28:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.568 ************************************ 00:10:58.568 START TEST raid_write_error_test 00:10:58.568 ************************************ 00:10:58.568 12:28:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.q5iAYgfRzd 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69110 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69110 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 69110 ']' 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:58.568 12:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.568 [2024-09-30 12:28:10.139192] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:10:58.568 [2024-09-30 12:28:10.139317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69110 ] 00:10:58.568 [2024-09-30 12:28:10.309452] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.828 [2024-09-30 12:28:10.550023] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.088 [2024-09-30 12:28:10.778230] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.088 [2024-09-30 12:28:10.778274] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.088 12:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:59.088 12:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:59.088 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.088 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:59.088 12:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.088 12:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.348 BaseBdev1_malloc 00:10:59.348 12:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.348 12:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:59.348 12:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.348 12:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.348 true 00:10:59.348 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.348 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:59.348 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.348 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.348 [2024-09-30 12:28:11.012232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:59.348 [2024-09-30 12:28:11.012368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.348 [2024-09-30 12:28:11.012391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:59.348 [2024-09-30 12:28:11.012403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.348 [2024-09-30 12:28:11.014722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.348 [2024-09-30 12:28:11.014771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:59.348 BaseBdev1 00:10:59.348 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:59.349 BaseBdev2_malloc 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.349 true 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.349 [2024-09-30 12:28:11.096487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:59.349 [2024-09-30 12:28:11.096544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.349 [2024-09-30 12:28:11.096560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:59.349 [2024-09-30 12:28:11.096571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.349 [2024-09-30 12:28:11.098866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.349 [2024-09-30 12:28:11.098901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:59.349 BaseBdev2 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.349 12:28:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.349 BaseBdev3_malloc 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.349 true 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.349 [2024-09-30 12:28:11.170047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:59.349 [2024-09-30 12:28:11.170099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.349 [2024-09-30 12:28:11.170116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:59.349 [2024-09-30 12:28:11.170127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.349 [2024-09-30 12:28:11.172447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.349 [2024-09-30 12:28:11.172486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:59.349 BaseBdev3 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.349 [2024-09-30 12:28:11.182079] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.349 [2024-09-30 12:28:11.184163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.349 [2024-09-30 12:28:11.184240] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.349 [2024-09-30 12:28:11.184440] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:59.349 [2024-09-30 12:28:11.184452] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:59.349 [2024-09-30 12:28:11.184684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:59.349 [2024-09-30 12:28:11.184860] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:59.349 [2024-09-30 12:28:11.184874] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:59.349 [2024-09-30 12:28:11.185007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.349 "name": "raid_bdev1", 00:10:59.349 "uuid": "65dd8d96-8b3f-4900-9b57-c3111929b146", 00:10:59.349 "strip_size_kb": 0, 00:10:59.349 "state": "online", 00:10:59.349 "raid_level": "raid1", 00:10:59.349 "superblock": true, 00:10:59.349 "num_base_bdevs": 3, 00:10:59.349 "num_base_bdevs_discovered": 3, 00:10:59.349 "num_base_bdevs_operational": 3, 00:10:59.349 "base_bdevs_list": [ 00:10:59.349 { 00:10:59.349 "name": "BaseBdev1", 00:10:59.349 
"uuid": "42ded159-01e8-5aba-8ad2-12da2f8b3533", 00:10:59.349 "is_configured": true, 00:10:59.349 "data_offset": 2048, 00:10:59.349 "data_size": 63488 00:10:59.349 }, 00:10:59.349 { 00:10:59.349 "name": "BaseBdev2", 00:10:59.349 "uuid": "02f2d80e-49ac-53d5-b710-d79ec15ebd28", 00:10:59.349 "is_configured": true, 00:10:59.349 "data_offset": 2048, 00:10:59.349 "data_size": 63488 00:10:59.349 }, 00:10:59.349 { 00:10:59.349 "name": "BaseBdev3", 00:10:59.349 "uuid": "0316e124-d517-5d6e-8b02-70a505b84c65", 00:10:59.349 "is_configured": true, 00:10:59.349 "data_offset": 2048, 00:10:59.349 "data_size": 63488 00:10:59.349 } 00:10:59.349 ] 00:10:59.349 }' 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.349 12:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.919 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:59.919 12:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:59.919 [2024-09-30 12:28:11.726535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.860 [2024-09-30 12:28:12.667209] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:00.860 [2024-09-30 12:28:12.667397] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:00.860 [2024-09-30 12:28:12.667660] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 
00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.860 "name": "raid_bdev1", 00:11:00.860 "uuid": "65dd8d96-8b3f-4900-9b57-c3111929b146", 00:11:00.860 "strip_size_kb": 0, 00:11:00.860 "state": "online", 00:11:00.860 "raid_level": "raid1", 00:11:00.860 "superblock": true, 00:11:00.860 "num_base_bdevs": 3, 00:11:00.860 "num_base_bdevs_discovered": 2, 00:11:00.860 "num_base_bdevs_operational": 2, 00:11:00.860 "base_bdevs_list": [ 00:11:00.860 { 00:11:00.860 "name": null, 00:11:00.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.860 "is_configured": false, 00:11:00.860 "data_offset": 0, 00:11:00.860 "data_size": 63488 00:11:00.860 }, 00:11:00.860 { 00:11:00.860 "name": "BaseBdev2", 00:11:00.860 "uuid": "02f2d80e-49ac-53d5-b710-d79ec15ebd28", 00:11:00.860 "is_configured": true, 00:11:00.860 "data_offset": 2048, 00:11:00.860 "data_size": 63488 00:11:00.860 }, 00:11:00.860 { 00:11:00.860 "name": "BaseBdev3", 00:11:00.860 "uuid": "0316e124-d517-5d6e-8b02-70a505b84c65", 00:11:00.860 "is_configured": true, 00:11:00.860 "data_offset": 2048, 00:11:00.860 "data_size": 63488 00:11:00.860 } 00:11:00.860 ] 00:11:00.860 }' 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.860 12:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.430 12:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:01.430 12:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.430 12:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.430 [2024-09-30 12:28:13.123956] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:01.430 [2024-09-30 12:28:13.124096] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.430 [2024-09-30 12:28:13.126676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.430 [2024-09-30 12:28:13.126779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.430 [2024-09-30 12:28:13.126883] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.430 [2024-09-30 12:28:13.126942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:01.430 12:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.430 { 00:11:01.430 "results": [ 00:11:01.430 { 00:11:01.430 "job": "raid_bdev1", 00:11:01.430 "core_mask": "0x1", 00:11:01.430 "workload": "randrw", 00:11:01.430 "percentage": 50, 00:11:01.430 "status": "finished", 00:11:01.430 "queue_depth": 1, 00:11:01.430 "io_size": 131072, 00:11:01.430 "runtime": 1.398085, 00:11:01.430 "iops": 11933.466134033339, 00:11:01.430 "mibps": 1491.6832667541673, 00:11:01.430 "io_failed": 0, 00:11:01.430 "io_timeout": 0, 00:11:01.430 "avg_latency_us": 81.2784602354163, 00:11:01.430 "min_latency_us": 22.581659388646287, 00:11:01.430 "max_latency_us": 1466.6899563318777 00:11:01.430 } 00:11:01.430 ], 00:11:01.430 "core_count": 1 00:11:01.430 } 00:11:01.430 12:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69110 00:11:01.430 12:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 69110 ']' 00:11:01.430 12:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 69110 00:11:01.430 12:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:01.430 12:28:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:01.430 12:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69110 00:11:01.430 12:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:01.430 killing process with pid 69110 00:11:01.430 12:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:01.430 12:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69110' 00:11:01.430 12:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 69110 00:11:01.430 [2024-09-30 12:28:13.172524] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.430 12:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 69110 00:11:01.690 [2024-09-30 12:28:13.414776] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.072 12:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.q5iAYgfRzd 00:11:03.072 12:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:03.072 12:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:03.072 12:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:03.072 12:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:03.072 12:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:03.072 12:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:03.072 12:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:03.072 00:11:03.072 real 0m4.797s 00:11:03.072 user 0m5.512s 00:11:03.072 sys 0m0.686s 00:11:03.072 12:28:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.072 ************************************ 00:11:03.072 END TEST raid_write_error_test 00:11:03.072 ************************************ 00:11:03.072 12:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.072 12:28:14 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:03.072 12:28:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:03.072 12:28:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:11:03.072 12:28:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:03.072 12:28:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.072 12:28:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.072 ************************************ 00:11:03.072 START TEST raid_state_function_test 00:11:03.072 ************************************ 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:03.072 
12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69254 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69254' 00:11:03.072 Process raid pid: 69254 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69254 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 69254 ']' 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:03.072 12:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.331 [2024-09-30 12:28:14.999804] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:03.331 [2024-09-30 12:28:14.999988] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.331 [2024-09-30 12:28:15.170429] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.589 [2024-09-30 12:28:15.425680] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.848 [2024-09-30 12:28:15.666436] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.848 [2024-09-30 12:28:15.666473] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.107 [2024-09-30 12:28:15.825202] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.107 [2024-09-30 12:28:15.825266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.107 [2024-09-30 12:28:15.825276] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.107 [2024-09-30 12:28:15.825286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.107 [2024-09-30 12:28:15.825292] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:04.107 [2024-09-30 12:28:15.825301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:04.107 [2024-09-30 12:28:15.825307] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:04.107 [2024-09-30 12:28:15.825317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.107 "name": "Existed_Raid", 00:11:04.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.107 "strip_size_kb": 64, 00:11:04.107 "state": "configuring", 00:11:04.107 "raid_level": "raid0", 00:11:04.107 "superblock": false, 00:11:04.107 "num_base_bdevs": 4, 00:11:04.107 "num_base_bdevs_discovered": 0, 00:11:04.107 "num_base_bdevs_operational": 4, 00:11:04.107 "base_bdevs_list": [ 00:11:04.107 { 00:11:04.107 "name": "BaseBdev1", 00:11:04.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.107 "is_configured": false, 00:11:04.107 "data_offset": 0, 00:11:04.107 "data_size": 0 00:11:04.107 }, 00:11:04.107 { 00:11:04.107 "name": "BaseBdev2", 00:11:04.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.107 "is_configured": false, 00:11:04.107 "data_offset": 0, 00:11:04.107 "data_size": 0 00:11:04.107 }, 00:11:04.107 { 00:11:04.107 "name": "BaseBdev3", 00:11:04.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.107 "is_configured": false, 00:11:04.107 "data_offset": 0, 00:11:04.107 "data_size": 0 00:11:04.107 }, 00:11:04.107 { 00:11:04.107 "name": "BaseBdev4", 00:11:04.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.107 "is_configured": false, 00:11:04.107 "data_offset": 0, 00:11:04.107 "data_size": 0 00:11:04.107 } 00:11:04.107 ] 00:11:04.107 }' 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.107 12:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.365 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:04.365 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.365 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.365 [2024-09-30 12:28:16.236403] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:04.366 [2024-09-30 12:28:16.236510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:04.366 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.366 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:04.366 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.366 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.366 [2024-09-30 12:28:16.248410] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.366 [2024-09-30 12:28:16.248491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.366 [2024-09-30 12:28:16.248517] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.366 [2024-09-30 12:28:16.248540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.366 [2024-09-30 12:28:16.248558] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:04.366 [2024-09-30 12:28:16.248578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:04.366 [2024-09-30 12:28:16.248596] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:04.366 [2024-09-30 12:28:16.248616] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:04.366 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.366 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:04.366 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.366 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.625 [2024-09-30 12:28:16.313145] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.625 BaseBdev1 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.625 [ 00:11:04.625 { 00:11:04.625 "name": "BaseBdev1", 00:11:04.625 "aliases": [ 00:11:04.625 "f2577c09-72f3-465a-8cbf-64640d1d442b" 00:11:04.625 ], 00:11:04.625 "product_name": "Malloc disk", 00:11:04.625 "block_size": 512, 00:11:04.625 "num_blocks": 65536, 00:11:04.625 "uuid": "f2577c09-72f3-465a-8cbf-64640d1d442b", 00:11:04.625 "assigned_rate_limits": { 00:11:04.625 "rw_ios_per_sec": 0, 00:11:04.625 "rw_mbytes_per_sec": 0, 00:11:04.625 "r_mbytes_per_sec": 0, 00:11:04.625 "w_mbytes_per_sec": 0 00:11:04.625 }, 00:11:04.625 "claimed": true, 00:11:04.625 "claim_type": "exclusive_write", 00:11:04.625 "zoned": false, 00:11:04.625 "supported_io_types": { 00:11:04.625 "read": true, 00:11:04.625 "write": true, 00:11:04.625 "unmap": true, 00:11:04.625 "flush": true, 00:11:04.625 "reset": true, 00:11:04.625 "nvme_admin": false, 00:11:04.625 "nvme_io": false, 00:11:04.625 "nvme_io_md": false, 00:11:04.625 "write_zeroes": true, 00:11:04.625 "zcopy": true, 00:11:04.625 "get_zone_info": false, 00:11:04.625 "zone_management": false, 00:11:04.625 "zone_append": false, 00:11:04.625 "compare": false, 00:11:04.625 "compare_and_write": false, 00:11:04.625 "abort": true, 00:11:04.625 "seek_hole": false, 00:11:04.625 "seek_data": false, 00:11:04.625 "copy": true, 00:11:04.625 "nvme_iov_md": false 00:11:04.625 }, 00:11:04.625 "memory_domains": [ 00:11:04.625 { 00:11:04.625 "dma_device_id": "system", 00:11:04.625 "dma_device_type": 1 00:11:04.625 }, 00:11:04.625 { 00:11:04.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.625 "dma_device_type": 2 00:11:04.625 } 00:11:04.625 ], 00:11:04.625 "driver_specific": {} 00:11:04.625 } 00:11:04.625 ] 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.625 "name": "Existed_Raid", 
00:11:04.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.625 "strip_size_kb": 64, 00:11:04.625 "state": "configuring", 00:11:04.625 "raid_level": "raid0", 00:11:04.625 "superblock": false, 00:11:04.625 "num_base_bdevs": 4, 00:11:04.625 "num_base_bdevs_discovered": 1, 00:11:04.625 "num_base_bdevs_operational": 4, 00:11:04.625 "base_bdevs_list": [ 00:11:04.625 { 00:11:04.625 "name": "BaseBdev1", 00:11:04.625 "uuid": "f2577c09-72f3-465a-8cbf-64640d1d442b", 00:11:04.625 "is_configured": true, 00:11:04.625 "data_offset": 0, 00:11:04.625 "data_size": 65536 00:11:04.625 }, 00:11:04.625 { 00:11:04.625 "name": "BaseBdev2", 00:11:04.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.625 "is_configured": false, 00:11:04.625 "data_offset": 0, 00:11:04.625 "data_size": 0 00:11:04.625 }, 00:11:04.625 { 00:11:04.625 "name": "BaseBdev3", 00:11:04.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.625 "is_configured": false, 00:11:04.625 "data_offset": 0, 00:11:04.625 "data_size": 0 00:11:04.625 }, 00:11:04.625 { 00:11:04.625 "name": "BaseBdev4", 00:11:04.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.625 "is_configured": false, 00:11:04.625 "data_offset": 0, 00:11:04.625 "data_size": 0 00:11:04.625 } 00:11:04.625 ] 00:11:04.625 }' 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.625 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.192 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:05.192 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.192 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.192 [2024-09-30 12:28:16.808309] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:05.192 [2024-09-30 12:28:16.808365] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:05.192 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.192 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:05.192 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.192 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.192 [2024-09-30 12:28:16.820340] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.192 [2024-09-30 12:28:16.822473] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.192 [2024-09-30 12:28:16.822516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:05.192 [2024-09-30 12:28:16.822525] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:05.192 [2024-09-30 12:28:16.822536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:05.193 [2024-09-30 12:28:16.822542] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:05.193 [2024-09-30 12:28:16.822550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.193 "name": "Existed_Raid", 00:11:05.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.193 "strip_size_kb": 64, 00:11:05.193 "state": "configuring", 00:11:05.193 "raid_level": "raid0", 00:11:05.193 "superblock": false, 00:11:05.193 "num_base_bdevs": 4, 00:11:05.193 
"num_base_bdevs_discovered": 1, 00:11:05.193 "num_base_bdevs_operational": 4, 00:11:05.193 "base_bdevs_list": [ 00:11:05.193 { 00:11:05.193 "name": "BaseBdev1", 00:11:05.193 "uuid": "f2577c09-72f3-465a-8cbf-64640d1d442b", 00:11:05.193 "is_configured": true, 00:11:05.193 "data_offset": 0, 00:11:05.193 "data_size": 65536 00:11:05.193 }, 00:11:05.193 { 00:11:05.193 "name": "BaseBdev2", 00:11:05.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.193 "is_configured": false, 00:11:05.193 "data_offset": 0, 00:11:05.193 "data_size": 0 00:11:05.193 }, 00:11:05.193 { 00:11:05.193 "name": "BaseBdev3", 00:11:05.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.193 "is_configured": false, 00:11:05.193 "data_offset": 0, 00:11:05.193 "data_size": 0 00:11:05.193 }, 00:11:05.193 { 00:11:05.193 "name": "BaseBdev4", 00:11:05.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.193 "is_configured": false, 00:11:05.193 "data_offset": 0, 00:11:05.193 "data_size": 0 00:11:05.193 } 00:11:05.193 ] 00:11:05.193 }' 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.193 12:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.452 [2024-09-30 12:28:17.293616] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.452 BaseBdev2 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:05.452 12:28:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.452 [ 00:11:05.452 { 00:11:05.452 "name": "BaseBdev2", 00:11:05.452 "aliases": [ 00:11:05.452 "5ecca7c7-525d-49de-b554-d0b897e8e428" 00:11:05.452 ], 00:11:05.452 "product_name": "Malloc disk", 00:11:05.452 "block_size": 512, 00:11:05.452 "num_blocks": 65536, 00:11:05.452 "uuid": "5ecca7c7-525d-49de-b554-d0b897e8e428", 00:11:05.452 "assigned_rate_limits": { 00:11:05.452 "rw_ios_per_sec": 0, 00:11:05.452 "rw_mbytes_per_sec": 0, 00:11:05.452 "r_mbytes_per_sec": 0, 00:11:05.452 "w_mbytes_per_sec": 0 00:11:05.452 }, 00:11:05.452 "claimed": true, 00:11:05.452 "claim_type": "exclusive_write", 00:11:05.452 "zoned": false, 00:11:05.452 "supported_io_types": { 
00:11:05.452 "read": true, 00:11:05.452 "write": true, 00:11:05.452 "unmap": true, 00:11:05.452 "flush": true, 00:11:05.452 "reset": true, 00:11:05.452 "nvme_admin": false, 00:11:05.452 "nvme_io": false, 00:11:05.452 "nvme_io_md": false, 00:11:05.452 "write_zeroes": true, 00:11:05.452 "zcopy": true, 00:11:05.452 "get_zone_info": false, 00:11:05.452 "zone_management": false, 00:11:05.452 "zone_append": false, 00:11:05.452 "compare": false, 00:11:05.452 "compare_and_write": false, 00:11:05.452 "abort": true, 00:11:05.452 "seek_hole": false, 00:11:05.452 "seek_data": false, 00:11:05.452 "copy": true, 00:11:05.452 "nvme_iov_md": false 00:11:05.452 }, 00:11:05.452 "memory_domains": [ 00:11:05.452 { 00:11:05.452 "dma_device_id": "system", 00:11:05.452 "dma_device_type": 1 00:11:05.452 }, 00:11:05.452 { 00:11:05.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.452 "dma_device_type": 2 00:11:05.452 } 00:11:05.452 ], 00:11:05.452 "driver_specific": {} 00:11:05.452 } 00:11:05.452 ] 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.452 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.712 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.712 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.712 "name": "Existed_Raid", 00:11:05.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.712 "strip_size_kb": 64, 00:11:05.712 "state": "configuring", 00:11:05.712 "raid_level": "raid0", 00:11:05.712 "superblock": false, 00:11:05.712 "num_base_bdevs": 4, 00:11:05.712 "num_base_bdevs_discovered": 2, 00:11:05.712 "num_base_bdevs_operational": 4, 00:11:05.712 "base_bdevs_list": [ 00:11:05.712 { 00:11:05.712 "name": "BaseBdev1", 00:11:05.712 "uuid": "f2577c09-72f3-465a-8cbf-64640d1d442b", 00:11:05.712 "is_configured": true, 00:11:05.712 "data_offset": 0, 00:11:05.712 "data_size": 65536 00:11:05.712 }, 00:11:05.712 { 00:11:05.712 "name": "BaseBdev2", 00:11:05.712 "uuid": "5ecca7c7-525d-49de-b554-d0b897e8e428", 00:11:05.712 
"is_configured": true, 00:11:05.712 "data_offset": 0, 00:11:05.712 "data_size": 65536 00:11:05.712 }, 00:11:05.712 { 00:11:05.712 "name": "BaseBdev3", 00:11:05.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.712 "is_configured": false, 00:11:05.712 "data_offset": 0, 00:11:05.712 "data_size": 0 00:11:05.712 }, 00:11:05.712 { 00:11:05.712 "name": "BaseBdev4", 00:11:05.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.712 "is_configured": false, 00:11:05.712 "data_offset": 0, 00:11:05.712 "data_size": 0 00:11:05.712 } 00:11:05.712 ] 00:11:05.712 }' 00:11:05.712 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.712 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.972 [2024-09-30 12:28:17.812383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:05.972 BaseBdev3 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.972 [ 00:11:05.972 { 00:11:05.972 "name": "BaseBdev3", 00:11:05.972 "aliases": [ 00:11:05.972 "cac158d0-9a9e-4c41-ad33-04ccede31a46" 00:11:05.972 ], 00:11:05.972 "product_name": "Malloc disk", 00:11:05.972 "block_size": 512, 00:11:05.972 "num_blocks": 65536, 00:11:05.972 "uuid": "cac158d0-9a9e-4c41-ad33-04ccede31a46", 00:11:05.972 "assigned_rate_limits": { 00:11:05.972 "rw_ios_per_sec": 0, 00:11:05.972 "rw_mbytes_per_sec": 0, 00:11:05.972 "r_mbytes_per_sec": 0, 00:11:05.972 "w_mbytes_per_sec": 0 00:11:05.972 }, 00:11:05.972 "claimed": true, 00:11:05.972 "claim_type": "exclusive_write", 00:11:05.972 "zoned": false, 00:11:05.972 "supported_io_types": { 00:11:05.972 "read": true, 00:11:05.972 "write": true, 00:11:05.972 "unmap": true, 00:11:05.972 "flush": true, 00:11:05.972 "reset": true, 00:11:05.972 "nvme_admin": false, 00:11:05.972 "nvme_io": false, 00:11:05.972 "nvme_io_md": false, 00:11:05.972 "write_zeroes": true, 00:11:05.972 "zcopy": true, 00:11:05.972 "get_zone_info": false, 00:11:05.972 "zone_management": false, 00:11:05.972 "zone_append": false, 00:11:05.972 "compare": false, 00:11:05.972 "compare_and_write": false, 
00:11:05.972 "abort": true, 00:11:05.972 "seek_hole": false, 00:11:05.972 "seek_data": false, 00:11:05.972 "copy": true, 00:11:05.972 "nvme_iov_md": false 00:11:05.972 }, 00:11:05.972 "memory_domains": [ 00:11:05.972 { 00:11:05.972 "dma_device_id": "system", 00:11:05.972 "dma_device_type": 1 00:11:05.972 }, 00:11:05.972 { 00:11:05.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.972 "dma_device_type": 2 00:11:05.972 } 00:11:05.972 ], 00:11:05.972 "driver_specific": {} 00:11:05.972 } 00:11:05.972 ] 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.972 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.973 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.973 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.973 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.973 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.973 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.973 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:05.973 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.973 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.973 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.973 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.973 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.232 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.232 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.232 "name": "Existed_Raid", 00:11:06.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.232 "strip_size_kb": 64, 00:11:06.232 "state": "configuring", 00:11:06.232 "raid_level": "raid0", 00:11:06.232 "superblock": false, 00:11:06.232 "num_base_bdevs": 4, 00:11:06.232 "num_base_bdevs_discovered": 3, 00:11:06.232 "num_base_bdevs_operational": 4, 00:11:06.232 "base_bdevs_list": [ 00:11:06.232 { 00:11:06.232 "name": "BaseBdev1", 00:11:06.232 "uuid": "f2577c09-72f3-465a-8cbf-64640d1d442b", 00:11:06.232 "is_configured": true, 00:11:06.232 "data_offset": 0, 00:11:06.232 "data_size": 65536 00:11:06.232 }, 00:11:06.232 { 00:11:06.232 "name": "BaseBdev2", 00:11:06.232 "uuid": "5ecca7c7-525d-49de-b554-d0b897e8e428", 00:11:06.232 "is_configured": true, 00:11:06.232 "data_offset": 0, 00:11:06.232 "data_size": 65536 00:11:06.232 }, 00:11:06.232 { 00:11:06.232 "name": "BaseBdev3", 00:11:06.232 "uuid": "cac158d0-9a9e-4c41-ad33-04ccede31a46", 00:11:06.232 "is_configured": true, 00:11:06.232 "data_offset": 0, 00:11:06.232 "data_size": 65536 00:11:06.232 }, 00:11:06.232 { 00:11:06.232 "name": "BaseBdev4", 00:11:06.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.232 "is_configured": false, 
00:11:06.232 "data_offset": 0, 00:11:06.232 "data_size": 0 00:11:06.232 } 00:11:06.232 ] 00:11:06.232 }' 00:11:06.232 12:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.232 12:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.492 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:06.492 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.492 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.492 [2024-09-30 12:28:18.317829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:06.492 [2024-09-30 12:28:18.317956] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:06.493 [2024-09-30 12:28:18.317981] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:06.493 [2024-09-30 12:28:18.318300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:06.493 [2024-09-30 12:28:18.318521] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:06.493 [2024-09-30 12:28:18.318570] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:06.493 [2024-09-30 12:28:18.318893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.493 BaseBdev4 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.493 [ 00:11:06.493 { 00:11:06.493 "name": "BaseBdev4", 00:11:06.493 "aliases": [ 00:11:06.493 "07e7896e-2aad-4fc2-aaf0-8f38b9af8c18" 00:11:06.493 ], 00:11:06.493 "product_name": "Malloc disk", 00:11:06.493 "block_size": 512, 00:11:06.493 "num_blocks": 65536, 00:11:06.493 "uuid": "07e7896e-2aad-4fc2-aaf0-8f38b9af8c18", 00:11:06.493 "assigned_rate_limits": { 00:11:06.493 "rw_ios_per_sec": 0, 00:11:06.493 "rw_mbytes_per_sec": 0, 00:11:06.493 "r_mbytes_per_sec": 0, 00:11:06.493 "w_mbytes_per_sec": 0 00:11:06.493 }, 00:11:06.493 "claimed": true, 00:11:06.493 "claim_type": "exclusive_write", 00:11:06.493 "zoned": false, 00:11:06.493 "supported_io_types": { 00:11:06.493 "read": true, 00:11:06.493 "write": true, 00:11:06.493 "unmap": true, 00:11:06.493 "flush": true, 00:11:06.493 "reset": true, 00:11:06.493 
"nvme_admin": false, 00:11:06.493 "nvme_io": false, 00:11:06.493 "nvme_io_md": false, 00:11:06.493 "write_zeroes": true, 00:11:06.493 "zcopy": true, 00:11:06.493 "get_zone_info": false, 00:11:06.493 "zone_management": false, 00:11:06.493 "zone_append": false, 00:11:06.493 "compare": false, 00:11:06.493 "compare_and_write": false, 00:11:06.493 "abort": true, 00:11:06.493 "seek_hole": false, 00:11:06.493 "seek_data": false, 00:11:06.493 "copy": true, 00:11:06.493 "nvme_iov_md": false 00:11:06.493 }, 00:11:06.493 "memory_domains": [ 00:11:06.493 { 00:11:06.493 "dma_device_id": "system", 00:11:06.493 "dma_device_type": 1 00:11:06.493 }, 00:11:06.493 { 00:11:06.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.493 "dma_device_type": 2 00:11:06.493 } 00:11:06.493 ], 00:11:06.493 "driver_specific": {} 00:11:06.493 } 00:11:06.493 ] 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.493 12:28:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.493 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.753 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.753 "name": "Existed_Raid", 00:11:06.753 "uuid": "f850985e-53f4-4da3-8f99-78e05e952704", 00:11:06.753 "strip_size_kb": 64, 00:11:06.753 "state": "online", 00:11:06.753 "raid_level": "raid0", 00:11:06.753 "superblock": false, 00:11:06.753 "num_base_bdevs": 4, 00:11:06.753 "num_base_bdevs_discovered": 4, 00:11:06.753 "num_base_bdevs_operational": 4, 00:11:06.753 "base_bdevs_list": [ 00:11:06.753 { 00:11:06.753 "name": "BaseBdev1", 00:11:06.753 "uuid": "f2577c09-72f3-465a-8cbf-64640d1d442b", 00:11:06.753 "is_configured": true, 00:11:06.753 "data_offset": 0, 00:11:06.753 "data_size": 65536 00:11:06.753 }, 00:11:06.753 { 00:11:06.753 "name": "BaseBdev2", 00:11:06.753 "uuid": "5ecca7c7-525d-49de-b554-d0b897e8e428", 00:11:06.753 "is_configured": true, 00:11:06.753 "data_offset": 0, 00:11:06.753 "data_size": 65536 00:11:06.753 }, 00:11:06.753 { 00:11:06.753 "name": "BaseBdev3", 00:11:06.753 "uuid": 
"cac158d0-9a9e-4c41-ad33-04ccede31a46", 00:11:06.753 "is_configured": true, 00:11:06.753 "data_offset": 0, 00:11:06.753 "data_size": 65536 00:11:06.753 }, 00:11:06.753 { 00:11:06.753 "name": "BaseBdev4", 00:11:06.753 "uuid": "07e7896e-2aad-4fc2-aaf0-8f38b9af8c18", 00:11:06.753 "is_configured": true, 00:11:06.753 "data_offset": 0, 00:11:06.753 "data_size": 65536 00:11:06.753 } 00:11:06.753 ] 00:11:06.753 }' 00:11:06.753 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.753 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.013 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:07.013 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:07.013 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:07.013 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:07.013 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:07.013 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:07.013 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:07.013 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:07.013 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.013 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.013 [2024-09-30 12:28:18.789340] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.013 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.013 12:28:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:07.013 "name": "Existed_Raid", 00:11:07.013 "aliases": [ 00:11:07.013 "f850985e-53f4-4da3-8f99-78e05e952704" 00:11:07.013 ], 00:11:07.013 "product_name": "Raid Volume", 00:11:07.013 "block_size": 512, 00:11:07.013 "num_blocks": 262144, 00:11:07.013 "uuid": "f850985e-53f4-4da3-8f99-78e05e952704", 00:11:07.013 "assigned_rate_limits": { 00:11:07.013 "rw_ios_per_sec": 0, 00:11:07.013 "rw_mbytes_per_sec": 0, 00:11:07.013 "r_mbytes_per_sec": 0, 00:11:07.013 "w_mbytes_per_sec": 0 00:11:07.013 }, 00:11:07.013 "claimed": false, 00:11:07.013 "zoned": false, 00:11:07.013 "supported_io_types": { 00:11:07.013 "read": true, 00:11:07.013 "write": true, 00:11:07.013 "unmap": true, 00:11:07.013 "flush": true, 00:11:07.013 "reset": true, 00:11:07.013 "nvme_admin": false, 00:11:07.013 "nvme_io": false, 00:11:07.013 "nvme_io_md": false, 00:11:07.013 "write_zeroes": true, 00:11:07.013 "zcopy": false, 00:11:07.013 "get_zone_info": false, 00:11:07.013 "zone_management": false, 00:11:07.013 "zone_append": false, 00:11:07.013 "compare": false, 00:11:07.013 "compare_and_write": false, 00:11:07.013 "abort": false, 00:11:07.013 "seek_hole": false, 00:11:07.013 "seek_data": false, 00:11:07.013 "copy": false, 00:11:07.013 "nvme_iov_md": false 00:11:07.013 }, 00:11:07.013 "memory_domains": [ 00:11:07.013 { 00:11:07.013 "dma_device_id": "system", 00:11:07.013 "dma_device_type": 1 00:11:07.013 }, 00:11:07.013 { 00:11:07.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.013 "dma_device_type": 2 00:11:07.013 }, 00:11:07.013 { 00:11:07.013 "dma_device_id": "system", 00:11:07.013 "dma_device_type": 1 00:11:07.013 }, 00:11:07.013 { 00:11:07.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.013 "dma_device_type": 2 00:11:07.013 }, 00:11:07.013 { 00:11:07.013 "dma_device_id": "system", 00:11:07.013 "dma_device_type": 1 00:11:07.013 }, 00:11:07.013 { 00:11:07.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:07.013 "dma_device_type": 2 00:11:07.013 }, 00:11:07.013 { 00:11:07.013 "dma_device_id": "system", 00:11:07.013 "dma_device_type": 1 00:11:07.013 }, 00:11:07.013 { 00:11:07.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.013 "dma_device_type": 2 00:11:07.013 } 00:11:07.013 ], 00:11:07.013 "driver_specific": { 00:11:07.013 "raid": { 00:11:07.013 "uuid": "f850985e-53f4-4da3-8f99-78e05e952704", 00:11:07.013 "strip_size_kb": 64, 00:11:07.013 "state": "online", 00:11:07.013 "raid_level": "raid0", 00:11:07.013 "superblock": false, 00:11:07.013 "num_base_bdevs": 4, 00:11:07.013 "num_base_bdevs_discovered": 4, 00:11:07.013 "num_base_bdevs_operational": 4, 00:11:07.013 "base_bdevs_list": [ 00:11:07.013 { 00:11:07.013 "name": "BaseBdev1", 00:11:07.013 "uuid": "f2577c09-72f3-465a-8cbf-64640d1d442b", 00:11:07.013 "is_configured": true, 00:11:07.013 "data_offset": 0, 00:11:07.013 "data_size": 65536 00:11:07.013 }, 00:11:07.013 { 00:11:07.013 "name": "BaseBdev2", 00:11:07.013 "uuid": "5ecca7c7-525d-49de-b554-d0b897e8e428", 00:11:07.013 "is_configured": true, 00:11:07.013 "data_offset": 0, 00:11:07.013 "data_size": 65536 00:11:07.013 }, 00:11:07.013 { 00:11:07.013 "name": "BaseBdev3", 00:11:07.013 "uuid": "cac158d0-9a9e-4c41-ad33-04ccede31a46", 00:11:07.013 "is_configured": true, 00:11:07.013 "data_offset": 0, 00:11:07.013 "data_size": 65536 00:11:07.013 }, 00:11:07.013 { 00:11:07.013 "name": "BaseBdev4", 00:11:07.014 "uuid": "07e7896e-2aad-4fc2-aaf0-8f38b9af8c18", 00:11:07.014 "is_configured": true, 00:11:07.014 "data_offset": 0, 00:11:07.014 "data_size": 65536 00:11:07.014 } 00:11:07.014 ] 00:11:07.014 } 00:11:07.014 } 00:11:07.014 }' 00:11:07.014 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.014 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:07.014 BaseBdev2 00:11:07.014 BaseBdev3 
00:11:07.014 BaseBdev4' 00:11:07.014 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.274 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:07.274 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.274 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:07.274 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.274 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.274 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.274 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.274 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.274 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.274 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.274 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:07.274 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.274 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.274 12:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.274 12:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.274 12:28:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.274 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.274 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.274 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:07.274 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.274 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.274 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.274 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.274 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.274 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.274 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.274 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:07.274 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.274 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.274 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.274 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.274 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.274 12:28:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.274 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:07.274 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.274 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.274 [2024-09-30 12:28:19.104510] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:07.274 [2024-09-30 12:28:19.104540] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.274 [2024-09-30 12:28:19.104592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.596 "name": "Existed_Raid", 00:11:07.596 "uuid": "f850985e-53f4-4da3-8f99-78e05e952704", 00:11:07.596 "strip_size_kb": 64, 00:11:07.596 "state": "offline", 00:11:07.596 "raid_level": "raid0", 00:11:07.596 "superblock": false, 00:11:07.596 "num_base_bdevs": 4, 00:11:07.596 "num_base_bdevs_discovered": 3, 00:11:07.596 "num_base_bdevs_operational": 3, 00:11:07.596 "base_bdevs_list": [ 00:11:07.596 { 00:11:07.596 "name": null, 00:11:07.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.596 "is_configured": false, 00:11:07.596 "data_offset": 0, 00:11:07.596 "data_size": 65536 00:11:07.596 }, 00:11:07.596 { 00:11:07.596 "name": "BaseBdev2", 00:11:07.596 "uuid": "5ecca7c7-525d-49de-b554-d0b897e8e428", 00:11:07.596 "is_configured": 
true, 00:11:07.596 "data_offset": 0, 00:11:07.596 "data_size": 65536 00:11:07.596 }, 00:11:07.596 { 00:11:07.596 "name": "BaseBdev3", 00:11:07.596 "uuid": "cac158d0-9a9e-4c41-ad33-04ccede31a46", 00:11:07.596 "is_configured": true, 00:11:07.596 "data_offset": 0, 00:11:07.596 "data_size": 65536 00:11:07.596 }, 00:11:07.596 { 00:11:07.596 "name": "BaseBdev4", 00:11:07.596 "uuid": "07e7896e-2aad-4fc2-aaf0-8f38b9af8c18", 00:11:07.596 "is_configured": true, 00:11:07.596 "data_offset": 0, 00:11:07.596 "data_size": 65536 00:11:07.596 } 00:11:07.596 ] 00:11:07.596 }' 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.596 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.867 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:07.867 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:07.867 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.867 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:07.867 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.867 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.867 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.867 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:07.867 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:07.867 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:07.867 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:07.867 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.867 [2024-09-30 12:28:19.676710] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:08.126 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.127 [2024-09-30 12:28:19.835010] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.127 12:28:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.127 12:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.127 [2024-09-30 12:28:20.001066] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:08.127 [2024-09-30 12:28:20.001166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.387 BaseBdev2 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.387 [ 00:11:08.387 { 00:11:08.387 "name": "BaseBdev2", 00:11:08.387 "aliases": [ 00:11:08.387 "698f9473-4261-4d6d-bd21-96cd2a91e2b5" 00:11:08.387 ], 00:11:08.387 "product_name": "Malloc disk", 00:11:08.387 "block_size": 512, 00:11:08.387 "num_blocks": 65536, 00:11:08.387 "uuid": "698f9473-4261-4d6d-bd21-96cd2a91e2b5", 00:11:08.387 "assigned_rate_limits": { 00:11:08.387 "rw_ios_per_sec": 0, 00:11:08.387 "rw_mbytes_per_sec": 0, 00:11:08.387 "r_mbytes_per_sec": 0, 00:11:08.387 "w_mbytes_per_sec": 0 00:11:08.387 }, 00:11:08.387 "claimed": false, 00:11:08.387 "zoned": false, 00:11:08.387 "supported_io_types": { 00:11:08.387 "read": true, 00:11:08.387 "write": true, 00:11:08.387 "unmap": true, 00:11:08.387 "flush": true, 00:11:08.387 "reset": true, 00:11:08.387 "nvme_admin": false, 00:11:08.387 "nvme_io": false, 00:11:08.387 "nvme_io_md": false, 00:11:08.387 "write_zeroes": true, 00:11:08.387 "zcopy": true, 00:11:08.387 "get_zone_info": false, 00:11:08.387 "zone_management": false, 00:11:08.387 "zone_append": false, 00:11:08.387 "compare": false, 00:11:08.387 "compare_and_write": false, 00:11:08.387 "abort": true, 00:11:08.387 "seek_hole": false, 00:11:08.387 
"seek_data": false, 00:11:08.387 "copy": true, 00:11:08.387 "nvme_iov_md": false 00:11:08.387 }, 00:11:08.387 "memory_domains": [ 00:11:08.387 { 00:11:08.387 "dma_device_id": "system", 00:11:08.387 "dma_device_type": 1 00:11:08.387 }, 00:11:08.387 { 00:11:08.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.387 "dma_device_type": 2 00:11:08.387 } 00:11:08.387 ], 00:11:08.387 "driver_specific": {} 00:11:08.387 } 00:11:08.387 ] 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.387 BaseBdev3 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.387 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.647 [ 00:11:08.647 { 00:11:08.647 "name": "BaseBdev3", 00:11:08.647 "aliases": [ 00:11:08.647 "c51a4650-8d8e-46ea-aab9-ffe4ca5d8cf1" 00:11:08.647 ], 00:11:08.647 "product_name": "Malloc disk", 00:11:08.647 "block_size": 512, 00:11:08.647 "num_blocks": 65536, 00:11:08.647 "uuid": "c51a4650-8d8e-46ea-aab9-ffe4ca5d8cf1", 00:11:08.647 "assigned_rate_limits": { 00:11:08.647 "rw_ios_per_sec": 0, 00:11:08.647 "rw_mbytes_per_sec": 0, 00:11:08.647 "r_mbytes_per_sec": 0, 00:11:08.647 "w_mbytes_per_sec": 0 00:11:08.647 }, 00:11:08.647 "claimed": false, 00:11:08.647 "zoned": false, 00:11:08.647 "supported_io_types": { 00:11:08.647 "read": true, 00:11:08.647 "write": true, 00:11:08.647 "unmap": true, 00:11:08.647 "flush": true, 00:11:08.647 "reset": true, 00:11:08.647 "nvme_admin": false, 00:11:08.647 "nvme_io": false, 00:11:08.647 "nvme_io_md": false, 00:11:08.647 "write_zeroes": true, 00:11:08.647 "zcopy": true, 00:11:08.647 "get_zone_info": false, 00:11:08.647 "zone_management": false, 00:11:08.647 "zone_append": false, 00:11:08.647 "compare": false, 00:11:08.647 "compare_and_write": false, 00:11:08.647 "abort": true, 00:11:08.647 "seek_hole": false, 00:11:08.647 "seek_data": false, 
00:11:08.647 "copy": true, 00:11:08.647 "nvme_iov_md": false 00:11:08.647 }, 00:11:08.647 "memory_domains": [ 00:11:08.647 { 00:11:08.647 "dma_device_id": "system", 00:11:08.647 "dma_device_type": 1 00:11:08.647 }, 00:11:08.647 { 00:11:08.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.647 "dma_device_type": 2 00:11:08.647 } 00:11:08.647 ], 00:11:08.647 "driver_specific": {} 00:11:08.647 } 00:11:08.647 ] 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.647 BaseBdev4 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:08.647 
12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.647 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.647 [ 00:11:08.647 { 00:11:08.647 "name": "BaseBdev4", 00:11:08.647 "aliases": [ 00:11:08.647 "a471faa6-82b4-496e-860a-c1b15f2a7b97" 00:11:08.647 ], 00:11:08.647 "product_name": "Malloc disk", 00:11:08.647 "block_size": 512, 00:11:08.647 "num_blocks": 65536, 00:11:08.647 "uuid": "a471faa6-82b4-496e-860a-c1b15f2a7b97", 00:11:08.647 "assigned_rate_limits": { 00:11:08.647 "rw_ios_per_sec": 0, 00:11:08.647 "rw_mbytes_per_sec": 0, 00:11:08.647 "r_mbytes_per_sec": 0, 00:11:08.647 "w_mbytes_per_sec": 0 00:11:08.648 }, 00:11:08.648 "claimed": false, 00:11:08.648 "zoned": false, 00:11:08.648 "supported_io_types": { 00:11:08.648 "read": true, 00:11:08.648 "write": true, 00:11:08.648 "unmap": true, 00:11:08.648 "flush": true, 00:11:08.648 "reset": true, 00:11:08.648 "nvme_admin": false, 00:11:08.648 "nvme_io": false, 00:11:08.648 "nvme_io_md": false, 00:11:08.648 "write_zeroes": true, 00:11:08.648 "zcopy": true, 00:11:08.648 "get_zone_info": false, 00:11:08.648 "zone_management": false, 00:11:08.648 "zone_append": false, 00:11:08.648 "compare": false, 00:11:08.648 "compare_and_write": false, 00:11:08.648 "abort": true, 00:11:08.648 "seek_hole": false, 00:11:08.648 "seek_data": false, 00:11:08.648 
"copy": true, 00:11:08.648 "nvme_iov_md": false 00:11:08.648 }, 00:11:08.648 "memory_domains": [ 00:11:08.648 { 00:11:08.648 "dma_device_id": "system", 00:11:08.648 "dma_device_type": 1 00:11:08.648 }, 00:11:08.648 { 00:11:08.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.648 "dma_device_type": 2 00:11:08.648 } 00:11:08.648 ], 00:11:08.648 "driver_specific": {} 00:11:08.648 } 00:11:08.648 ] 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.648 [2024-09-30 12:28:20.403864] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:08.648 [2024-09-30 12:28:20.403983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:08.648 [2024-09-30 12:28:20.404025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.648 [2024-09-30 12:28:20.406101] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:08.648 [2024-09-30 12:28:20.406203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.648 12:28:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.648 "name": "Existed_Raid", 00:11:08.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.648 "strip_size_kb": 64, 00:11:08.648 "state": "configuring", 00:11:08.648 
"raid_level": "raid0", 00:11:08.648 "superblock": false, 00:11:08.648 "num_base_bdevs": 4, 00:11:08.648 "num_base_bdevs_discovered": 3, 00:11:08.648 "num_base_bdevs_operational": 4, 00:11:08.648 "base_bdevs_list": [ 00:11:08.648 { 00:11:08.648 "name": "BaseBdev1", 00:11:08.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.648 "is_configured": false, 00:11:08.648 "data_offset": 0, 00:11:08.648 "data_size": 0 00:11:08.648 }, 00:11:08.648 { 00:11:08.648 "name": "BaseBdev2", 00:11:08.648 "uuid": "698f9473-4261-4d6d-bd21-96cd2a91e2b5", 00:11:08.648 "is_configured": true, 00:11:08.648 "data_offset": 0, 00:11:08.648 "data_size": 65536 00:11:08.648 }, 00:11:08.648 { 00:11:08.648 "name": "BaseBdev3", 00:11:08.648 "uuid": "c51a4650-8d8e-46ea-aab9-ffe4ca5d8cf1", 00:11:08.648 "is_configured": true, 00:11:08.648 "data_offset": 0, 00:11:08.648 "data_size": 65536 00:11:08.648 }, 00:11:08.648 { 00:11:08.648 "name": "BaseBdev4", 00:11:08.648 "uuid": "a471faa6-82b4-496e-860a-c1b15f2a7b97", 00:11:08.648 "is_configured": true, 00:11:08.648 "data_offset": 0, 00:11:08.648 "data_size": 65536 00:11:08.648 } 00:11:08.648 ] 00:11:08.648 }' 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.648 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.216 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:09.216 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.216 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.216 [2024-09-30 12:28:20.823116] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:09.216 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.216 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:09.216 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.216 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.216 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.217 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.217 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.217 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.217 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.217 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.217 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.217 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.217 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.217 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.217 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.217 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.217 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.217 "name": "Existed_Raid", 00:11:09.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.217 "strip_size_kb": 64, 00:11:09.217 "state": "configuring", 00:11:09.217 "raid_level": "raid0", 00:11:09.217 "superblock": false, 00:11:09.217 
"num_base_bdevs": 4, 00:11:09.217 "num_base_bdevs_discovered": 2, 00:11:09.217 "num_base_bdevs_operational": 4, 00:11:09.217 "base_bdevs_list": [ 00:11:09.217 { 00:11:09.217 "name": "BaseBdev1", 00:11:09.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.217 "is_configured": false, 00:11:09.217 "data_offset": 0, 00:11:09.217 "data_size": 0 00:11:09.217 }, 00:11:09.217 { 00:11:09.217 "name": null, 00:11:09.217 "uuid": "698f9473-4261-4d6d-bd21-96cd2a91e2b5", 00:11:09.217 "is_configured": false, 00:11:09.217 "data_offset": 0, 00:11:09.217 "data_size": 65536 00:11:09.217 }, 00:11:09.217 { 00:11:09.217 "name": "BaseBdev3", 00:11:09.217 "uuid": "c51a4650-8d8e-46ea-aab9-ffe4ca5d8cf1", 00:11:09.217 "is_configured": true, 00:11:09.217 "data_offset": 0, 00:11:09.217 "data_size": 65536 00:11:09.217 }, 00:11:09.217 { 00:11:09.217 "name": "BaseBdev4", 00:11:09.217 "uuid": "a471faa6-82b4-496e-860a-c1b15f2a7b97", 00:11:09.217 "is_configured": true, 00:11:09.217 "data_offset": 0, 00:11:09.217 "data_size": 65536 00:11:09.217 } 00:11:09.217 ] 00:11:09.217 }' 00:11:09.217 12:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.217 12:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.476 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.476 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.476 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.476 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:09.476 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.476 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:09.476 12:28:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:09.476 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.476 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.476 [2024-09-30 12:28:21.347434] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.476 BaseBdev1 00:11:09.477 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.477 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:09.477 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:09.477 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:09.477 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:09.477 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:09.477 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:09.477 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:09.477 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.477 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.477 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.477 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:09.477 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.477 12:28:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:09.737 [ 00:11:09.737 { 00:11:09.737 "name": "BaseBdev1", 00:11:09.737 "aliases": [ 00:11:09.737 "e9c15edd-4c12-42e3-9b24-b1544e9e4e3d" 00:11:09.737 ], 00:11:09.737 "product_name": "Malloc disk", 00:11:09.737 "block_size": 512, 00:11:09.737 "num_blocks": 65536, 00:11:09.737 "uuid": "e9c15edd-4c12-42e3-9b24-b1544e9e4e3d", 00:11:09.737 "assigned_rate_limits": { 00:11:09.737 "rw_ios_per_sec": 0, 00:11:09.737 "rw_mbytes_per_sec": 0, 00:11:09.737 "r_mbytes_per_sec": 0, 00:11:09.737 "w_mbytes_per_sec": 0 00:11:09.737 }, 00:11:09.737 "claimed": true, 00:11:09.737 "claim_type": "exclusive_write", 00:11:09.737 "zoned": false, 00:11:09.737 "supported_io_types": { 00:11:09.737 "read": true, 00:11:09.737 "write": true, 00:11:09.737 "unmap": true, 00:11:09.737 "flush": true, 00:11:09.737 "reset": true, 00:11:09.737 "nvme_admin": false, 00:11:09.737 "nvme_io": false, 00:11:09.737 "nvme_io_md": false, 00:11:09.737 "write_zeroes": true, 00:11:09.737 "zcopy": true, 00:11:09.737 "get_zone_info": false, 00:11:09.737 "zone_management": false, 00:11:09.737 "zone_append": false, 00:11:09.737 "compare": false, 00:11:09.737 "compare_and_write": false, 00:11:09.737 "abort": true, 00:11:09.737 "seek_hole": false, 00:11:09.737 "seek_data": false, 00:11:09.737 "copy": true, 00:11:09.737 "nvme_iov_md": false 00:11:09.737 }, 00:11:09.737 "memory_domains": [ 00:11:09.737 { 00:11:09.737 "dma_device_id": "system", 00:11:09.737 "dma_device_type": 1 00:11:09.737 }, 00:11:09.737 { 00:11:09.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.737 "dma_device_type": 2 00:11:09.737 } 00:11:09.737 ], 00:11:09.737 "driver_specific": {} 00:11:09.737 } 00:11:09.737 ] 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.737 "name": "Existed_Raid", 00:11:09.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.737 "strip_size_kb": 64, 00:11:09.737 "state": "configuring", 00:11:09.737 "raid_level": "raid0", 00:11:09.737 "superblock": false, 
00:11:09.737 "num_base_bdevs": 4, 00:11:09.737 "num_base_bdevs_discovered": 3, 00:11:09.737 "num_base_bdevs_operational": 4, 00:11:09.737 "base_bdevs_list": [ 00:11:09.737 { 00:11:09.737 "name": "BaseBdev1", 00:11:09.737 "uuid": "e9c15edd-4c12-42e3-9b24-b1544e9e4e3d", 00:11:09.737 "is_configured": true, 00:11:09.737 "data_offset": 0, 00:11:09.737 "data_size": 65536 00:11:09.737 }, 00:11:09.737 { 00:11:09.737 "name": null, 00:11:09.737 "uuid": "698f9473-4261-4d6d-bd21-96cd2a91e2b5", 00:11:09.737 "is_configured": false, 00:11:09.737 "data_offset": 0, 00:11:09.737 "data_size": 65536 00:11:09.737 }, 00:11:09.737 { 00:11:09.737 "name": "BaseBdev3", 00:11:09.737 "uuid": "c51a4650-8d8e-46ea-aab9-ffe4ca5d8cf1", 00:11:09.737 "is_configured": true, 00:11:09.737 "data_offset": 0, 00:11:09.737 "data_size": 65536 00:11:09.737 }, 00:11:09.737 { 00:11:09.737 "name": "BaseBdev4", 00:11:09.737 "uuid": "a471faa6-82b4-496e-860a-c1b15f2a7b97", 00:11:09.737 "is_configured": true, 00:11:09.737 "data_offset": 0, 00:11:09.737 "data_size": 65536 00:11:09.737 } 00:11:09.737 ] 00:11:09.737 }' 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.737 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:09.997 12:28:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.997 [2024-09-30 12:28:21.846611] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.997 12:28:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.997 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.256 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.256 "name": "Existed_Raid", 00:11:10.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.256 "strip_size_kb": 64, 00:11:10.256 "state": "configuring", 00:11:10.256 "raid_level": "raid0", 00:11:10.256 "superblock": false, 00:11:10.256 "num_base_bdevs": 4, 00:11:10.256 "num_base_bdevs_discovered": 2, 00:11:10.256 "num_base_bdevs_operational": 4, 00:11:10.256 "base_bdevs_list": [ 00:11:10.256 { 00:11:10.256 "name": "BaseBdev1", 00:11:10.256 "uuid": "e9c15edd-4c12-42e3-9b24-b1544e9e4e3d", 00:11:10.256 "is_configured": true, 00:11:10.256 "data_offset": 0, 00:11:10.256 "data_size": 65536 00:11:10.256 }, 00:11:10.256 { 00:11:10.256 "name": null, 00:11:10.256 "uuid": "698f9473-4261-4d6d-bd21-96cd2a91e2b5", 00:11:10.256 "is_configured": false, 00:11:10.256 "data_offset": 0, 00:11:10.256 "data_size": 65536 00:11:10.256 }, 00:11:10.256 { 00:11:10.256 "name": null, 00:11:10.256 "uuid": "c51a4650-8d8e-46ea-aab9-ffe4ca5d8cf1", 00:11:10.256 "is_configured": false, 00:11:10.256 "data_offset": 0, 00:11:10.256 "data_size": 65536 00:11:10.256 }, 00:11:10.256 { 00:11:10.256 "name": "BaseBdev4", 00:11:10.256 "uuid": "a471faa6-82b4-496e-860a-c1b15f2a7b97", 00:11:10.256 "is_configured": true, 00:11:10.256 "data_offset": 0, 00:11:10.256 "data_size": 65536 00:11:10.256 } 00:11:10.256 ] 00:11:10.256 }' 00:11:10.256 12:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.256 12:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.515 12:28:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.516 [2024-09-30 12:28:22.317839] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.516 "name": "Existed_Raid", 00:11:10.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.516 "strip_size_kb": 64, 00:11:10.516 "state": "configuring", 00:11:10.516 "raid_level": "raid0", 00:11:10.516 "superblock": false, 00:11:10.516 "num_base_bdevs": 4, 00:11:10.516 "num_base_bdevs_discovered": 3, 00:11:10.516 "num_base_bdevs_operational": 4, 00:11:10.516 "base_bdevs_list": [ 00:11:10.516 { 00:11:10.516 "name": "BaseBdev1", 00:11:10.516 "uuid": "e9c15edd-4c12-42e3-9b24-b1544e9e4e3d", 00:11:10.516 "is_configured": true, 00:11:10.516 "data_offset": 0, 00:11:10.516 "data_size": 65536 00:11:10.516 }, 00:11:10.516 { 00:11:10.516 "name": null, 00:11:10.516 "uuid": "698f9473-4261-4d6d-bd21-96cd2a91e2b5", 00:11:10.516 "is_configured": false, 00:11:10.516 "data_offset": 0, 00:11:10.516 "data_size": 65536 00:11:10.516 }, 00:11:10.516 { 00:11:10.516 "name": "BaseBdev3", 00:11:10.516 "uuid": "c51a4650-8d8e-46ea-aab9-ffe4ca5d8cf1", 
00:11:10.516 "is_configured": true, 00:11:10.516 "data_offset": 0, 00:11:10.516 "data_size": 65536 00:11:10.516 }, 00:11:10.516 { 00:11:10.516 "name": "BaseBdev4", 00:11:10.516 "uuid": "a471faa6-82b4-496e-860a-c1b15f2a7b97", 00:11:10.516 "is_configured": true, 00:11:10.516 "data_offset": 0, 00:11:10.516 "data_size": 65536 00:11:10.516 } 00:11:10.516 ] 00:11:10.516 }' 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.516 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.085 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.085 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:11.085 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.085 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.085 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.085 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:11.085 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:11.085 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.085 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.085 [2024-09-30 12:28:22.828951] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:11.085 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.085 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:11.085 12:28:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.085 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.085 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:11.086 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.086 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.086 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.086 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.086 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.086 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.086 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.086 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.086 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.086 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.086 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.345 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.345 "name": "Existed_Raid", 00:11:11.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.345 "strip_size_kb": 64, 00:11:11.345 "state": "configuring", 00:11:11.345 "raid_level": "raid0", 00:11:11.345 "superblock": false, 00:11:11.345 "num_base_bdevs": 4, 00:11:11.345 "num_base_bdevs_discovered": 2, 00:11:11.345 
"num_base_bdevs_operational": 4, 00:11:11.345 "base_bdevs_list": [ 00:11:11.345 { 00:11:11.345 "name": null, 00:11:11.345 "uuid": "e9c15edd-4c12-42e3-9b24-b1544e9e4e3d", 00:11:11.345 "is_configured": false, 00:11:11.345 "data_offset": 0, 00:11:11.345 "data_size": 65536 00:11:11.345 }, 00:11:11.345 { 00:11:11.345 "name": null, 00:11:11.345 "uuid": "698f9473-4261-4d6d-bd21-96cd2a91e2b5", 00:11:11.345 "is_configured": false, 00:11:11.345 "data_offset": 0, 00:11:11.345 "data_size": 65536 00:11:11.345 }, 00:11:11.345 { 00:11:11.345 "name": "BaseBdev3", 00:11:11.345 "uuid": "c51a4650-8d8e-46ea-aab9-ffe4ca5d8cf1", 00:11:11.345 "is_configured": true, 00:11:11.345 "data_offset": 0, 00:11:11.345 "data_size": 65536 00:11:11.345 }, 00:11:11.345 { 00:11:11.345 "name": "BaseBdev4", 00:11:11.345 "uuid": "a471faa6-82b4-496e-860a-c1b15f2a7b97", 00:11:11.345 "is_configured": true, 00:11:11.345 "data_offset": 0, 00:11:11.345 "data_size": 65536 00:11:11.345 } 00:11:11.345 ] 00:11:11.345 }' 00:11:11.345 12:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.345 12:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.605 [2024-09-30 12:28:23.393947] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.605 
12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.605 "name": "Existed_Raid", 00:11:11.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.605 "strip_size_kb": 64, 00:11:11.605 "state": "configuring", 00:11:11.605 "raid_level": "raid0", 00:11:11.605 "superblock": false, 00:11:11.605 "num_base_bdevs": 4, 00:11:11.605 "num_base_bdevs_discovered": 3, 00:11:11.605 "num_base_bdevs_operational": 4, 00:11:11.605 "base_bdevs_list": [ 00:11:11.605 { 00:11:11.605 "name": null, 00:11:11.605 "uuid": "e9c15edd-4c12-42e3-9b24-b1544e9e4e3d", 00:11:11.605 "is_configured": false, 00:11:11.605 "data_offset": 0, 00:11:11.605 "data_size": 65536 00:11:11.605 }, 00:11:11.605 { 00:11:11.605 "name": "BaseBdev2", 00:11:11.605 "uuid": "698f9473-4261-4d6d-bd21-96cd2a91e2b5", 00:11:11.605 "is_configured": true, 00:11:11.605 "data_offset": 0, 00:11:11.605 "data_size": 65536 00:11:11.605 }, 00:11:11.605 { 00:11:11.605 "name": "BaseBdev3", 00:11:11.605 "uuid": "c51a4650-8d8e-46ea-aab9-ffe4ca5d8cf1", 00:11:11.605 "is_configured": true, 00:11:11.605 "data_offset": 0, 00:11:11.605 "data_size": 65536 00:11:11.605 }, 00:11:11.605 { 00:11:11.605 "name": "BaseBdev4", 00:11:11.605 "uuid": "a471faa6-82b4-496e-860a-c1b15f2a7b97", 00:11:11.605 "is_configured": true, 00:11:11.605 "data_offset": 0, 00:11:11.605 "data_size": 65536 00:11:11.605 } 00:11:11.605 ] 00:11:11.605 }' 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.605 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.175 12:28:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e9c15edd-4c12-42e3-9b24-b1544e9e4e3d 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.175 [2024-09-30 12:28:23.993418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:12.175 [2024-09-30 12:28:23.993470] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:12.175 [2024-09-30 12:28:23.993477] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:12.175 [2024-09-30 12:28:23.993772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:11:12.175 [2024-09-30 12:28:23.993930] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:12.175 [2024-09-30 12:28:23.993942] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:12.175 [2024-09-30 12:28:23.994219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.175 NewBaseBdev 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.175 12:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:12.175 [ 00:11:12.175 { 00:11:12.175 "name": "NewBaseBdev", 00:11:12.175 "aliases": [ 00:11:12.175 "e9c15edd-4c12-42e3-9b24-b1544e9e4e3d" 00:11:12.175 ], 00:11:12.175 "product_name": "Malloc disk", 00:11:12.175 "block_size": 512, 00:11:12.175 "num_blocks": 65536, 00:11:12.175 "uuid": "e9c15edd-4c12-42e3-9b24-b1544e9e4e3d", 00:11:12.175 "assigned_rate_limits": { 00:11:12.175 "rw_ios_per_sec": 0, 00:11:12.175 "rw_mbytes_per_sec": 0, 00:11:12.175 "r_mbytes_per_sec": 0, 00:11:12.175 "w_mbytes_per_sec": 0 00:11:12.175 }, 00:11:12.175 "claimed": true, 00:11:12.175 "claim_type": "exclusive_write", 00:11:12.175 "zoned": false, 00:11:12.175 "supported_io_types": { 00:11:12.175 "read": true, 00:11:12.175 "write": true, 00:11:12.175 "unmap": true, 00:11:12.175 "flush": true, 00:11:12.175 "reset": true, 00:11:12.175 "nvme_admin": false, 00:11:12.175 "nvme_io": false, 00:11:12.175 "nvme_io_md": false, 00:11:12.175 "write_zeroes": true, 00:11:12.175 "zcopy": true, 00:11:12.175 "get_zone_info": false, 00:11:12.175 "zone_management": false, 00:11:12.175 "zone_append": false, 00:11:12.175 "compare": false, 00:11:12.175 "compare_and_write": false, 00:11:12.175 "abort": true, 00:11:12.175 "seek_hole": false, 00:11:12.175 "seek_data": false, 00:11:12.175 "copy": true, 00:11:12.175 "nvme_iov_md": false 00:11:12.175 }, 00:11:12.175 "memory_domains": [ 00:11:12.175 { 00:11:12.175 "dma_device_id": "system", 00:11:12.175 "dma_device_type": 1 00:11:12.175 }, 00:11:12.175 { 00:11:12.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.175 "dma_device_type": 2 00:11:12.175 } 00:11:12.175 ], 00:11:12.175 "driver_specific": {} 00:11:12.175 } 00:11:12.175 ] 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.175 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.435 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.435 "name": "Existed_Raid", 00:11:12.435 "uuid": "ed35f5a7-bb00-4262-b14d-468f5d0f3daa", 00:11:12.435 "strip_size_kb": 64, 00:11:12.435 "state": "online", 00:11:12.435 "raid_level": "raid0", 00:11:12.435 "superblock": false, 00:11:12.435 "num_base_bdevs": 4, 00:11:12.435 
"num_base_bdevs_discovered": 4, 00:11:12.435 "num_base_bdevs_operational": 4, 00:11:12.435 "base_bdevs_list": [ 00:11:12.435 { 00:11:12.435 "name": "NewBaseBdev", 00:11:12.435 "uuid": "e9c15edd-4c12-42e3-9b24-b1544e9e4e3d", 00:11:12.435 "is_configured": true, 00:11:12.435 "data_offset": 0, 00:11:12.435 "data_size": 65536 00:11:12.435 }, 00:11:12.435 { 00:11:12.435 "name": "BaseBdev2", 00:11:12.435 "uuid": "698f9473-4261-4d6d-bd21-96cd2a91e2b5", 00:11:12.435 "is_configured": true, 00:11:12.435 "data_offset": 0, 00:11:12.435 "data_size": 65536 00:11:12.435 }, 00:11:12.435 { 00:11:12.435 "name": "BaseBdev3", 00:11:12.435 "uuid": "c51a4650-8d8e-46ea-aab9-ffe4ca5d8cf1", 00:11:12.435 "is_configured": true, 00:11:12.435 "data_offset": 0, 00:11:12.435 "data_size": 65536 00:11:12.435 }, 00:11:12.435 { 00:11:12.435 "name": "BaseBdev4", 00:11:12.435 "uuid": "a471faa6-82b4-496e-860a-c1b15f2a7b97", 00:11:12.435 "is_configured": true, 00:11:12.435 "data_offset": 0, 00:11:12.435 "data_size": 65536 00:11:12.435 } 00:11:12.435 ] 00:11:12.435 }' 00:11:12.436 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.436 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.695 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:12.695 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:12.695 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:12.695 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:12.695 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:12.695 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:12.695 12:28:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:12.695 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:12.695 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.695 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.695 [2024-09-30 12:28:24.437015] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.695 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.695 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:12.695 "name": "Existed_Raid", 00:11:12.695 "aliases": [ 00:11:12.695 "ed35f5a7-bb00-4262-b14d-468f5d0f3daa" 00:11:12.695 ], 00:11:12.695 "product_name": "Raid Volume", 00:11:12.695 "block_size": 512, 00:11:12.695 "num_blocks": 262144, 00:11:12.695 "uuid": "ed35f5a7-bb00-4262-b14d-468f5d0f3daa", 00:11:12.695 "assigned_rate_limits": { 00:11:12.695 "rw_ios_per_sec": 0, 00:11:12.695 "rw_mbytes_per_sec": 0, 00:11:12.695 "r_mbytes_per_sec": 0, 00:11:12.695 "w_mbytes_per_sec": 0 00:11:12.695 }, 00:11:12.695 "claimed": false, 00:11:12.695 "zoned": false, 00:11:12.695 "supported_io_types": { 00:11:12.695 "read": true, 00:11:12.695 "write": true, 00:11:12.695 "unmap": true, 00:11:12.695 "flush": true, 00:11:12.695 "reset": true, 00:11:12.695 "nvme_admin": false, 00:11:12.695 "nvme_io": false, 00:11:12.695 "nvme_io_md": false, 00:11:12.695 "write_zeroes": true, 00:11:12.695 "zcopy": false, 00:11:12.695 "get_zone_info": false, 00:11:12.695 "zone_management": false, 00:11:12.695 "zone_append": false, 00:11:12.695 "compare": false, 00:11:12.695 "compare_and_write": false, 00:11:12.695 "abort": false, 00:11:12.695 "seek_hole": false, 00:11:12.696 "seek_data": false, 00:11:12.696 "copy": false, 00:11:12.696 "nvme_iov_md": false 00:11:12.696 }, 00:11:12.696 "memory_domains": [ 
00:11:12.696 { 00:11:12.696 "dma_device_id": "system", 00:11:12.696 "dma_device_type": 1 00:11:12.696 }, 00:11:12.696 { 00:11:12.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.696 "dma_device_type": 2 00:11:12.696 }, 00:11:12.696 { 00:11:12.696 "dma_device_id": "system", 00:11:12.696 "dma_device_type": 1 00:11:12.696 }, 00:11:12.696 { 00:11:12.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.696 "dma_device_type": 2 00:11:12.696 }, 00:11:12.696 { 00:11:12.696 "dma_device_id": "system", 00:11:12.696 "dma_device_type": 1 00:11:12.696 }, 00:11:12.696 { 00:11:12.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.696 "dma_device_type": 2 00:11:12.696 }, 00:11:12.696 { 00:11:12.696 "dma_device_id": "system", 00:11:12.696 "dma_device_type": 1 00:11:12.696 }, 00:11:12.696 { 00:11:12.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.696 "dma_device_type": 2 00:11:12.696 } 00:11:12.696 ], 00:11:12.696 "driver_specific": { 00:11:12.696 "raid": { 00:11:12.696 "uuid": "ed35f5a7-bb00-4262-b14d-468f5d0f3daa", 00:11:12.696 "strip_size_kb": 64, 00:11:12.696 "state": "online", 00:11:12.696 "raid_level": "raid0", 00:11:12.696 "superblock": false, 00:11:12.696 "num_base_bdevs": 4, 00:11:12.696 "num_base_bdevs_discovered": 4, 00:11:12.696 "num_base_bdevs_operational": 4, 00:11:12.696 "base_bdevs_list": [ 00:11:12.696 { 00:11:12.696 "name": "NewBaseBdev", 00:11:12.696 "uuid": "e9c15edd-4c12-42e3-9b24-b1544e9e4e3d", 00:11:12.696 "is_configured": true, 00:11:12.696 "data_offset": 0, 00:11:12.696 "data_size": 65536 00:11:12.696 }, 00:11:12.696 { 00:11:12.696 "name": "BaseBdev2", 00:11:12.696 "uuid": "698f9473-4261-4d6d-bd21-96cd2a91e2b5", 00:11:12.696 "is_configured": true, 00:11:12.696 "data_offset": 0, 00:11:12.696 "data_size": 65536 00:11:12.696 }, 00:11:12.696 { 00:11:12.696 "name": "BaseBdev3", 00:11:12.696 "uuid": "c51a4650-8d8e-46ea-aab9-ffe4ca5d8cf1", 00:11:12.696 "is_configured": true, 00:11:12.696 "data_offset": 0, 00:11:12.696 "data_size": 65536 
00:11:12.696 }, 00:11:12.696 { 00:11:12.696 "name": "BaseBdev4", 00:11:12.696 "uuid": "a471faa6-82b4-496e-860a-c1b15f2a7b97", 00:11:12.696 "is_configured": true, 00:11:12.696 "data_offset": 0, 00:11:12.696 "data_size": 65536 00:11:12.696 } 00:11:12.696 ] 00:11:12.696 } 00:11:12.696 } 00:11:12.696 }' 00:11:12.696 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:12.696 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:12.696 BaseBdev2 00:11:12.696 BaseBdev3 00:11:12.696 BaseBdev4' 00:11:12.696 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.696 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:12.696 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.696 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.696 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:12.696 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.696 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.956 
12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.956 [2024-09-30 12:28:24.776066] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:12.956 [2024-09-30 12:28:24.776137] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.956 [2024-09-30 12:28:24.776227] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.956 [2024-09-30 12:28:24.776310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.956 [2024-09-30 12:28:24.776344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69254 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # '[' -z 69254 ']' 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 69254 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69254 00:11:12.956 killing process with pid 69254 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69254' 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 69254 00:11:12.956 [2024-09-30 12:28:24.824101] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:12.956 12:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 69254 00:11:13.525 [2024-09-30 12:28:25.236879] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:14.907 00:11:14.907 real 0m11.657s 00:11:14.907 user 0m18.194s 00:11:14.907 sys 0m2.129s 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:14.907 ************************************ 00:11:14.907 END TEST raid_state_function_test 00:11:14.907 ************************************ 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.907 12:28:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:11:14.907 12:28:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:14.907 12:28:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:14.907 12:28:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:14.907 ************************************ 00:11:14.907 START TEST raid_state_function_test_sb 00:11:14.907 ************************************ 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:14.907 
12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:14.907 Process raid pid: 69925 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69925 00:11:14.907 12:28:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69925' 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69925 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 69925 ']' 00:11:14.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.907 12:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.907 [2024-09-30 12:28:26.733184] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:14.907 [2024-09-30 12:28:26.733378] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.167 [2024-09-30 12:28:26.903274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.426 [2024-09-30 12:28:27.149685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.686 [2024-09-30 12:28:27.387086] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.686 [2024-09-30 12:28:27.387225] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.686 12:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.686 12:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:15.686 12:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:15.686 12:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.686 12:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.686 [2024-09-30 12:28:27.562719] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.686 [2024-09-30 12:28:27.562866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.686 [2024-09-30 12:28:27.562881] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.686 [2024-09-30 12:28:27.562892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.686 [2024-09-30 12:28:27.562898] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:11:15.686 [2024-09-30 12:28:27.562910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:15.686 [2024-09-30 12:28:27.562916] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:15.686 [2024-09-30 12:28:27.562925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:15.686 12:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.686 12:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:15.686 12:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.686 12:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.686 12:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.686 12:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.686 12:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.686 12:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.687 12:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.687 12:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.687 12:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.687 12:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.687 12:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.687 12:28:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.687 12:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.946 12:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.946 12:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.946 "name": "Existed_Raid", 00:11:15.946 "uuid": "72ab43fa-f142-4153-a228-77c746084b8c", 00:11:15.946 "strip_size_kb": 64, 00:11:15.946 "state": "configuring", 00:11:15.946 "raid_level": "raid0", 00:11:15.946 "superblock": true, 00:11:15.946 "num_base_bdevs": 4, 00:11:15.946 "num_base_bdevs_discovered": 0, 00:11:15.946 "num_base_bdevs_operational": 4, 00:11:15.946 "base_bdevs_list": [ 00:11:15.946 { 00:11:15.946 "name": "BaseBdev1", 00:11:15.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.946 "is_configured": false, 00:11:15.946 "data_offset": 0, 00:11:15.946 "data_size": 0 00:11:15.946 }, 00:11:15.946 { 00:11:15.946 "name": "BaseBdev2", 00:11:15.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.946 "is_configured": false, 00:11:15.946 "data_offset": 0, 00:11:15.946 "data_size": 0 00:11:15.946 }, 00:11:15.946 { 00:11:15.946 "name": "BaseBdev3", 00:11:15.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.946 "is_configured": false, 00:11:15.946 "data_offset": 0, 00:11:15.946 "data_size": 0 00:11:15.946 }, 00:11:15.946 { 00:11:15.946 "name": "BaseBdev4", 00:11:15.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.946 "is_configured": false, 00:11:15.946 "data_offset": 0, 00:11:15.946 "data_size": 0 00:11:15.946 } 00:11:15.946 ] 00:11:15.946 }' 00:11:15.946 12:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.946 12:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.205 12:28:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:16.205 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.205 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.205 [2024-09-30 12:28:28.021810] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:16.205 [2024-09-30 12:28:28.021896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:16.205 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.205 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:16.205 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.205 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.205 [2024-09-30 12:28:28.033825] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:16.205 [2024-09-30 12:28:28.033866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:16.205 [2024-09-30 12:28:28.033874] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:16.205 [2024-09-30 12:28:28.033884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:16.205 [2024-09-30 12:28:28.033890] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:16.205 [2024-09-30 12:28:28.033899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:16.205 [2024-09-30 12:28:28.033905] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:16.205 [2024-09-30 12:28:28.033914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:16.205 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.205 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:16.205 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.205 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.465 [2024-09-30 12:28:28.117168] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:16.465 BaseBdev1 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.465 [ 00:11:16.465 { 00:11:16.465 "name": "BaseBdev1", 00:11:16.465 "aliases": [ 00:11:16.465 "4f6c93c2-2d15-410f-ae88-6c60cb352922" 00:11:16.465 ], 00:11:16.465 "product_name": "Malloc disk", 00:11:16.465 "block_size": 512, 00:11:16.465 "num_blocks": 65536, 00:11:16.465 "uuid": "4f6c93c2-2d15-410f-ae88-6c60cb352922", 00:11:16.465 "assigned_rate_limits": { 00:11:16.465 "rw_ios_per_sec": 0, 00:11:16.465 "rw_mbytes_per_sec": 0, 00:11:16.465 "r_mbytes_per_sec": 0, 00:11:16.465 "w_mbytes_per_sec": 0 00:11:16.465 }, 00:11:16.465 "claimed": true, 00:11:16.465 "claim_type": "exclusive_write", 00:11:16.465 "zoned": false, 00:11:16.465 "supported_io_types": { 00:11:16.465 "read": true, 00:11:16.465 "write": true, 00:11:16.465 "unmap": true, 00:11:16.465 "flush": true, 00:11:16.465 "reset": true, 00:11:16.465 "nvme_admin": false, 00:11:16.465 "nvme_io": false, 00:11:16.465 "nvme_io_md": false, 00:11:16.465 "write_zeroes": true, 00:11:16.465 "zcopy": true, 00:11:16.465 "get_zone_info": false, 00:11:16.465 "zone_management": false, 00:11:16.465 "zone_append": false, 00:11:16.465 "compare": false, 00:11:16.465 "compare_and_write": false, 00:11:16.465 "abort": true, 00:11:16.465 "seek_hole": false, 00:11:16.465 "seek_data": false, 00:11:16.465 "copy": true, 00:11:16.465 "nvme_iov_md": false 00:11:16.465 }, 00:11:16.465 "memory_domains": [ 00:11:16.465 { 00:11:16.465 "dma_device_id": "system", 00:11:16.465 "dma_device_type": 1 00:11:16.465 }, 00:11:16.465 { 00:11:16.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.465 "dma_device_type": 2 00:11:16.465 } 
00:11:16.465 ], 00:11:16.465 "driver_specific": {} 00:11:16.465 } 00:11:16.465 ] 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.465 12:28:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.465 "name": "Existed_Raid", 00:11:16.465 "uuid": "cf65809d-e1e6-483d-81a2-2e6a39cafb38", 00:11:16.465 "strip_size_kb": 64, 00:11:16.465 "state": "configuring", 00:11:16.465 "raid_level": "raid0", 00:11:16.465 "superblock": true, 00:11:16.465 "num_base_bdevs": 4, 00:11:16.465 "num_base_bdevs_discovered": 1, 00:11:16.465 "num_base_bdevs_operational": 4, 00:11:16.465 "base_bdevs_list": [ 00:11:16.465 { 00:11:16.465 "name": "BaseBdev1", 00:11:16.465 "uuid": "4f6c93c2-2d15-410f-ae88-6c60cb352922", 00:11:16.465 "is_configured": true, 00:11:16.465 "data_offset": 2048, 00:11:16.465 "data_size": 63488 00:11:16.465 }, 00:11:16.465 { 00:11:16.465 "name": "BaseBdev2", 00:11:16.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.465 "is_configured": false, 00:11:16.465 "data_offset": 0, 00:11:16.465 "data_size": 0 00:11:16.465 }, 00:11:16.465 { 00:11:16.465 "name": "BaseBdev3", 00:11:16.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.465 "is_configured": false, 00:11:16.465 "data_offset": 0, 00:11:16.465 "data_size": 0 00:11:16.465 }, 00:11:16.465 { 00:11:16.465 "name": "BaseBdev4", 00:11:16.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.465 "is_configured": false, 00:11:16.465 "data_offset": 0, 00:11:16.465 "data_size": 0 00:11:16.465 } 00:11:16.465 ] 00:11:16.465 }' 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.465 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.725 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:16.725 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.725 12:28:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.725 [2024-09-30 12:28:28.600364] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:16.725 [2024-09-30 12:28:28.600418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:16.725 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.725 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:16.725 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.725 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.725 [2024-09-30 12:28:28.608407] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:16.725 [2024-09-30 12:28:28.610423] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:16.725 [2024-09-30 12:28:28.610499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:16.725 [2024-09-30 12:28:28.610526] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:16.725 [2024-09-30 12:28:28.610551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:16.725 [2024-09-30 12:28:28.610568] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:16.725 [2024-09-30 12:28:28.610587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:16.726 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.726 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:16.726 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.726 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:16.726 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.726 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.726 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.726 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.726 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.726 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.726 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.726 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.726 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.985 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.985 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.985 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.985 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.985 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.985 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:16.985 "name": "Existed_Raid", 00:11:16.985 "uuid": "2f4ba218-b6c6-4866-b1d4-a5c2fe439561", 00:11:16.985 "strip_size_kb": 64, 00:11:16.985 "state": "configuring", 00:11:16.985 "raid_level": "raid0", 00:11:16.985 "superblock": true, 00:11:16.985 "num_base_bdevs": 4, 00:11:16.985 "num_base_bdevs_discovered": 1, 00:11:16.985 "num_base_bdevs_operational": 4, 00:11:16.985 "base_bdevs_list": [ 00:11:16.985 { 00:11:16.985 "name": "BaseBdev1", 00:11:16.985 "uuid": "4f6c93c2-2d15-410f-ae88-6c60cb352922", 00:11:16.985 "is_configured": true, 00:11:16.985 "data_offset": 2048, 00:11:16.985 "data_size": 63488 00:11:16.985 }, 00:11:16.985 { 00:11:16.985 "name": "BaseBdev2", 00:11:16.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.985 "is_configured": false, 00:11:16.985 "data_offset": 0, 00:11:16.985 "data_size": 0 00:11:16.985 }, 00:11:16.985 { 00:11:16.985 "name": "BaseBdev3", 00:11:16.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.985 "is_configured": false, 00:11:16.985 "data_offset": 0, 00:11:16.985 "data_size": 0 00:11:16.985 }, 00:11:16.985 { 00:11:16.985 "name": "BaseBdev4", 00:11:16.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.985 "is_configured": false, 00:11:16.985 "data_offset": 0, 00:11:16.985 "data_size": 0 00:11:16.985 } 00:11:16.985 ] 00:11:16.985 }' 00:11:16.985 12:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.985 12:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.245 [2024-09-30 12:28:29.100137] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:17.245 BaseBdev2 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.245 [ 00:11:17.245 { 00:11:17.245 "name": "BaseBdev2", 00:11:17.245 "aliases": [ 00:11:17.245 "2bed92f1-909e-40b7-977a-31b8be3cf944" 00:11:17.245 ], 00:11:17.245 "product_name": "Malloc disk", 00:11:17.245 "block_size": 512, 00:11:17.245 "num_blocks": 65536, 00:11:17.245 "uuid": "2bed92f1-909e-40b7-977a-31b8be3cf944", 
00:11:17.245 "assigned_rate_limits": { 00:11:17.245 "rw_ios_per_sec": 0, 00:11:17.245 "rw_mbytes_per_sec": 0, 00:11:17.245 "r_mbytes_per_sec": 0, 00:11:17.245 "w_mbytes_per_sec": 0 00:11:17.245 }, 00:11:17.245 "claimed": true, 00:11:17.245 "claim_type": "exclusive_write", 00:11:17.245 "zoned": false, 00:11:17.245 "supported_io_types": { 00:11:17.245 "read": true, 00:11:17.245 "write": true, 00:11:17.245 "unmap": true, 00:11:17.245 "flush": true, 00:11:17.245 "reset": true, 00:11:17.245 "nvme_admin": false, 00:11:17.245 "nvme_io": false, 00:11:17.245 "nvme_io_md": false, 00:11:17.245 "write_zeroes": true, 00:11:17.245 "zcopy": true, 00:11:17.245 "get_zone_info": false, 00:11:17.245 "zone_management": false, 00:11:17.245 "zone_append": false, 00:11:17.245 "compare": false, 00:11:17.245 "compare_and_write": false, 00:11:17.245 "abort": true, 00:11:17.245 "seek_hole": false, 00:11:17.245 "seek_data": false, 00:11:17.245 "copy": true, 00:11:17.245 "nvme_iov_md": false 00:11:17.245 }, 00:11:17.245 "memory_domains": [ 00:11:17.245 { 00:11:17.245 "dma_device_id": "system", 00:11:17.245 "dma_device_type": 1 00:11:17.245 }, 00:11:17.245 { 00:11:17.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.245 "dma_device_type": 2 00:11:17.245 } 00:11:17.245 ], 00:11:17.245 "driver_specific": {} 00:11:17.245 } 00:11:17.245 ] 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.245 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.507 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.507 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.507 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.507 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.507 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.507 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.507 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.507 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.507 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.507 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.507 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.507 "name": "Existed_Raid", 00:11:17.507 "uuid": "2f4ba218-b6c6-4866-b1d4-a5c2fe439561", 00:11:17.507 "strip_size_kb": 64, 00:11:17.507 "state": "configuring", 00:11:17.507 "raid_level": "raid0", 00:11:17.507 "superblock": true, 00:11:17.507 "num_base_bdevs": 4, 00:11:17.507 "num_base_bdevs_discovered": 2, 00:11:17.507 
"num_base_bdevs_operational": 4, 00:11:17.507 "base_bdevs_list": [ 00:11:17.507 { 00:11:17.507 "name": "BaseBdev1", 00:11:17.507 "uuid": "4f6c93c2-2d15-410f-ae88-6c60cb352922", 00:11:17.507 "is_configured": true, 00:11:17.507 "data_offset": 2048, 00:11:17.507 "data_size": 63488 00:11:17.507 }, 00:11:17.507 { 00:11:17.507 "name": "BaseBdev2", 00:11:17.507 "uuid": "2bed92f1-909e-40b7-977a-31b8be3cf944", 00:11:17.507 "is_configured": true, 00:11:17.507 "data_offset": 2048, 00:11:17.507 "data_size": 63488 00:11:17.507 }, 00:11:17.507 { 00:11:17.507 "name": "BaseBdev3", 00:11:17.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.507 "is_configured": false, 00:11:17.507 "data_offset": 0, 00:11:17.507 "data_size": 0 00:11:17.507 }, 00:11:17.507 { 00:11:17.507 "name": "BaseBdev4", 00:11:17.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.507 "is_configured": false, 00:11:17.507 "data_offset": 0, 00:11:17.507 "data_size": 0 00:11:17.507 } 00:11:17.507 ] 00:11:17.507 }' 00:11:17.507 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.507 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.767 [2024-09-30 12:28:29.583069] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.767 BaseBdev3 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.767 [ 00:11:17.767 { 00:11:17.767 "name": "BaseBdev3", 00:11:17.767 "aliases": [ 00:11:17.767 "6a6dad04-4fb2-443b-81b8-11a047281cfc" 00:11:17.767 ], 00:11:17.767 "product_name": "Malloc disk", 00:11:17.767 "block_size": 512, 00:11:17.767 "num_blocks": 65536, 00:11:17.767 "uuid": "6a6dad04-4fb2-443b-81b8-11a047281cfc", 00:11:17.767 "assigned_rate_limits": { 00:11:17.767 "rw_ios_per_sec": 0, 00:11:17.767 "rw_mbytes_per_sec": 0, 00:11:17.767 "r_mbytes_per_sec": 0, 00:11:17.767 "w_mbytes_per_sec": 0 00:11:17.767 }, 00:11:17.767 "claimed": true, 00:11:17.767 "claim_type": "exclusive_write", 00:11:17.767 "zoned": false, 00:11:17.767 "supported_io_types": { 
00:11:17.767 "read": true, 00:11:17.767 "write": true, 00:11:17.767 "unmap": true, 00:11:17.767 "flush": true, 00:11:17.767 "reset": true, 00:11:17.767 "nvme_admin": false, 00:11:17.767 "nvme_io": false, 00:11:17.767 "nvme_io_md": false, 00:11:17.767 "write_zeroes": true, 00:11:17.767 "zcopy": true, 00:11:17.767 "get_zone_info": false, 00:11:17.767 "zone_management": false, 00:11:17.767 "zone_append": false, 00:11:17.767 "compare": false, 00:11:17.767 "compare_and_write": false, 00:11:17.767 "abort": true, 00:11:17.767 "seek_hole": false, 00:11:17.767 "seek_data": false, 00:11:17.767 "copy": true, 00:11:17.767 "nvme_iov_md": false 00:11:17.767 }, 00:11:17.767 "memory_domains": [ 00:11:17.767 { 00:11:17.767 "dma_device_id": "system", 00:11:17.767 "dma_device_type": 1 00:11:17.767 }, 00:11:17.767 { 00:11:17.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.767 "dma_device_type": 2 00:11:17.767 } 00:11:17.767 ], 00:11:17.767 "driver_specific": {} 00:11:17.767 } 00:11:17.767 ] 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.767 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.027 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.027 "name": "Existed_Raid", 00:11:18.027 "uuid": "2f4ba218-b6c6-4866-b1d4-a5c2fe439561", 00:11:18.027 "strip_size_kb": 64, 00:11:18.027 "state": "configuring", 00:11:18.027 "raid_level": "raid0", 00:11:18.027 "superblock": true, 00:11:18.027 "num_base_bdevs": 4, 00:11:18.027 "num_base_bdevs_discovered": 3, 00:11:18.027 "num_base_bdevs_operational": 4, 00:11:18.027 "base_bdevs_list": [ 00:11:18.027 { 00:11:18.027 "name": "BaseBdev1", 00:11:18.027 "uuid": "4f6c93c2-2d15-410f-ae88-6c60cb352922", 00:11:18.027 "is_configured": true, 00:11:18.027 "data_offset": 2048, 00:11:18.027 "data_size": 63488 00:11:18.027 }, 00:11:18.027 { 00:11:18.027 "name": "BaseBdev2", 00:11:18.027 
"uuid": "2bed92f1-909e-40b7-977a-31b8be3cf944", 00:11:18.027 "is_configured": true, 00:11:18.027 "data_offset": 2048, 00:11:18.027 "data_size": 63488 00:11:18.027 }, 00:11:18.027 { 00:11:18.027 "name": "BaseBdev3", 00:11:18.027 "uuid": "6a6dad04-4fb2-443b-81b8-11a047281cfc", 00:11:18.027 "is_configured": true, 00:11:18.027 "data_offset": 2048, 00:11:18.027 "data_size": 63488 00:11:18.027 }, 00:11:18.027 { 00:11:18.027 "name": "BaseBdev4", 00:11:18.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.027 "is_configured": false, 00:11:18.027 "data_offset": 0, 00:11:18.027 "data_size": 0 00:11:18.027 } 00:11:18.027 ] 00:11:18.027 }' 00:11:18.027 12:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.027 12:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.287 [2024-09-30 12:28:30.085287] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:18.287 [2024-09-30 12:28:30.085563] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:18.287 [2024-09-30 12:28:30.085578] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:18.287 [2024-09-30 12:28:30.085929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:18.287 BaseBdev4 00:11:18.287 [2024-09-30 12:28:30.086097] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:18.287 [2024-09-30 12:28:30.086115] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:18.287 [2024-09-30 12:28:30.086263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.287 [ 00:11:18.287 { 00:11:18.287 "name": "BaseBdev4", 00:11:18.287 "aliases": [ 00:11:18.287 "31825875-ab21-4135-975b-95152514352a" 00:11:18.287 ], 00:11:18.287 "product_name": "Malloc disk", 00:11:18.287 "block_size": 512, 00:11:18.287 
"num_blocks": 65536, 00:11:18.287 "uuid": "31825875-ab21-4135-975b-95152514352a", 00:11:18.287 "assigned_rate_limits": { 00:11:18.287 "rw_ios_per_sec": 0, 00:11:18.287 "rw_mbytes_per_sec": 0, 00:11:18.287 "r_mbytes_per_sec": 0, 00:11:18.287 "w_mbytes_per_sec": 0 00:11:18.287 }, 00:11:18.287 "claimed": true, 00:11:18.287 "claim_type": "exclusive_write", 00:11:18.287 "zoned": false, 00:11:18.287 "supported_io_types": { 00:11:18.287 "read": true, 00:11:18.287 "write": true, 00:11:18.287 "unmap": true, 00:11:18.287 "flush": true, 00:11:18.287 "reset": true, 00:11:18.287 "nvme_admin": false, 00:11:18.287 "nvme_io": false, 00:11:18.287 "nvme_io_md": false, 00:11:18.287 "write_zeroes": true, 00:11:18.287 "zcopy": true, 00:11:18.287 "get_zone_info": false, 00:11:18.287 "zone_management": false, 00:11:18.287 "zone_append": false, 00:11:18.287 "compare": false, 00:11:18.287 "compare_and_write": false, 00:11:18.287 "abort": true, 00:11:18.287 "seek_hole": false, 00:11:18.287 "seek_data": false, 00:11:18.287 "copy": true, 00:11:18.287 "nvme_iov_md": false 00:11:18.287 }, 00:11:18.287 "memory_domains": [ 00:11:18.287 { 00:11:18.287 "dma_device_id": "system", 00:11:18.287 "dma_device_type": 1 00:11:18.287 }, 00:11:18.287 { 00:11:18.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.287 "dma_device_type": 2 00:11:18.287 } 00:11:18.287 ], 00:11:18.287 "driver_specific": {} 00:11:18.287 } 00:11:18.287 ] 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.287 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.288 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.288 "name": "Existed_Raid", 00:11:18.288 "uuid": "2f4ba218-b6c6-4866-b1d4-a5c2fe439561", 00:11:18.288 "strip_size_kb": 64, 00:11:18.288 "state": "online", 00:11:18.288 "raid_level": "raid0", 00:11:18.288 "superblock": true, 00:11:18.288 "num_base_bdevs": 4, 
00:11:18.288 "num_base_bdevs_discovered": 4, 00:11:18.288 "num_base_bdevs_operational": 4, 00:11:18.288 "base_bdevs_list": [ 00:11:18.288 { 00:11:18.288 "name": "BaseBdev1", 00:11:18.288 "uuid": "4f6c93c2-2d15-410f-ae88-6c60cb352922", 00:11:18.288 "is_configured": true, 00:11:18.288 "data_offset": 2048, 00:11:18.288 "data_size": 63488 00:11:18.288 }, 00:11:18.288 { 00:11:18.288 "name": "BaseBdev2", 00:11:18.288 "uuid": "2bed92f1-909e-40b7-977a-31b8be3cf944", 00:11:18.288 "is_configured": true, 00:11:18.288 "data_offset": 2048, 00:11:18.288 "data_size": 63488 00:11:18.288 }, 00:11:18.288 { 00:11:18.288 "name": "BaseBdev3", 00:11:18.288 "uuid": "6a6dad04-4fb2-443b-81b8-11a047281cfc", 00:11:18.288 "is_configured": true, 00:11:18.288 "data_offset": 2048, 00:11:18.288 "data_size": 63488 00:11:18.288 }, 00:11:18.288 { 00:11:18.288 "name": "BaseBdev4", 00:11:18.288 "uuid": "31825875-ab21-4135-975b-95152514352a", 00:11:18.288 "is_configured": true, 00:11:18.288 "data_offset": 2048, 00:11:18.288 "data_size": 63488 00:11:18.288 } 00:11:18.288 ] 00:11:18.288 }' 00:11:18.288 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.288 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.856 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:18.856 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:18.856 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:18.857 
12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.857 [2024-09-30 12:28:30.580836] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:18.857 "name": "Existed_Raid", 00:11:18.857 "aliases": [ 00:11:18.857 "2f4ba218-b6c6-4866-b1d4-a5c2fe439561" 00:11:18.857 ], 00:11:18.857 "product_name": "Raid Volume", 00:11:18.857 "block_size": 512, 00:11:18.857 "num_blocks": 253952, 00:11:18.857 "uuid": "2f4ba218-b6c6-4866-b1d4-a5c2fe439561", 00:11:18.857 "assigned_rate_limits": { 00:11:18.857 "rw_ios_per_sec": 0, 00:11:18.857 "rw_mbytes_per_sec": 0, 00:11:18.857 "r_mbytes_per_sec": 0, 00:11:18.857 "w_mbytes_per_sec": 0 00:11:18.857 }, 00:11:18.857 "claimed": false, 00:11:18.857 "zoned": false, 00:11:18.857 "supported_io_types": { 00:11:18.857 "read": true, 00:11:18.857 "write": true, 00:11:18.857 "unmap": true, 00:11:18.857 "flush": true, 00:11:18.857 "reset": true, 00:11:18.857 "nvme_admin": false, 00:11:18.857 "nvme_io": false, 00:11:18.857 "nvme_io_md": false, 00:11:18.857 "write_zeroes": true, 00:11:18.857 "zcopy": false, 00:11:18.857 "get_zone_info": false, 00:11:18.857 "zone_management": false, 00:11:18.857 "zone_append": false, 00:11:18.857 "compare": false, 00:11:18.857 "compare_and_write": false, 00:11:18.857 "abort": false, 00:11:18.857 "seek_hole": false, 00:11:18.857 "seek_data": false, 00:11:18.857 "copy": false, 00:11:18.857 
"nvme_iov_md": false 00:11:18.857 }, 00:11:18.857 "memory_domains": [ 00:11:18.857 { 00:11:18.857 "dma_device_id": "system", 00:11:18.857 "dma_device_type": 1 00:11:18.857 }, 00:11:18.857 { 00:11:18.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.857 "dma_device_type": 2 00:11:18.857 }, 00:11:18.857 { 00:11:18.857 "dma_device_id": "system", 00:11:18.857 "dma_device_type": 1 00:11:18.857 }, 00:11:18.857 { 00:11:18.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.857 "dma_device_type": 2 00:11:18.857 }, 00:11:18.857 { 00:11:18.857 "dma_device_id": "system", 00:11:18.857 "dma_device_type": 1 00:11:18.857 }, 00:11:18.857 { 00:11:18.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.857 "dma_device_type": 2 00:11:18.857 }, 00:11:18.857 { 00:11:18.857 "dma_device_id": "system", 00:11:18.857 "dma_device_type": 1 00:11:18.857 }, 00:11:18.857 { 00:11:18.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.857 "dma_device_type": 2 00:11:18.857 } 00:11:18.857 ], 00:11:18.857 "driver_specific": { 00:11:18.857 "raid": { 00:11:18.857 "uuid": "2f4ba218-b6c6-4866-b1d4-a5c2fe439561", 00:11:18.857 "strip_size_kb": 64, 00:11:18.857 "state": "online", 00:11:18.857 "raid_level": "raid0", 00:11:18.857 "superblock": true, 00:11:18.857 "num_base_bdevs": 4, 00:11:18.857 "num_base_bdevs_discovered": 4, 00:11:18.857 "num_base_bdevs_operational": 4, 00:11:18.857 "base_bdevs_list": [ 00:11:18.857 { 00:11:18.857 "name": "BaseBdev1", 00:11:18.857 "uuid": "4f6c93c2-2d15-410f-ae88-6c60cb352922", 00:11:18.857 "is_configured": true, 00:11:18.857 "data_offset": 2048, 00:11:18.857 "data_size": 63488 00:11:18.857 }, 00:11:18.857 { 00:11:18.857 "name": "BaseBdev2", 00:11:18.857 "uuid": "2bed92f1-909e-40b7-977a-31b8be3cf944", 00:11:18.857 "is_configured": true, 00:11:18.857 "data_offset": 2048, 00:11:18.857 "data_size": 63488 00:11:18.857 }, 00:11:18.857 { 00:11:18.857 "name": "BaseBdev3", 00:11:18.857 "uuid": "6a6dad04-4fb2-443b-81b8-11a047281cfc", 00:11:18.857 "is_configured": true, 
00:11:18.857 "data_offset": 2048, 00:11:18.857 "data_size": 63488 00:11:18.857 }, 00:11:18.857 { 00:11:18.857 "name": "BaseBdev4", 00:11:18.857 "uuid": "31825875-ab21-4135-975b-95152514352a", 00:11:18.857 "is_configured": true, 00:11:18.857 "data_offset": 2048, 00:11:18.857 "data_size": 63488 00:11:18.857 } 00:11:18.857 ] 00:11:18.857 } 00:11:18.857 } 00:11:18.857 }' 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:18.857 BaseBdev2 00:11:18.857 BaseBdev3 00:11:18.857 BaseBdev4' 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.857 12:28:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.857 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.117 [2024-09-30 12:28:30.868053] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:19.117 [2024-09-30 12:28:30.868129] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.117 [2024-09-30 12:28:30.868206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.117 12:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:19.377 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.377 "name": "Existed_Raid", 00:11:19.377 "uuid": "2f4ba218-b6c6-4866-b1d4-a5c2fe439561", 00:11:19.377 "strip_size_kb": 64, 00:11:19.377 "state": "offline", 00:11:19.377 "raid_level": "raid0", 00:11:19.377 "superblock": true, 00:11:19.377 "num_base_bdevs": 4, 00:11:19.377 "num_base_bdevs_discovered": 3, 00:11:19.377 "num_base_bdevs_operational": 3, 00:11:19.377 "base_bdevs_list": [ 00:11:19.377 { 00:11:19.377 "name": null, 00:11:19.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.377 "is_configured": false, 00:11:19.377 "data_offset": 0, 00:11:19.377 "data_size": 63488 00:11:19.377 }, 00:11:19.377 { 00:11:19.377 "name": "BaseBdev2", 00:11:19.377 "uuid": "2bed92f1-909e-40b7-977a-31b8be3cf944", 00:11:19.377 "is_configured": true, 00:11:19.377 "data_offset": 2048, 00:11:19.377 "data_size": 63488 00:11:19.377 }, 00:11:19.377 { 00:11:19.377 "name": "BaseBdev3", 00:11:19.377 "uuid": "6a6dad04-4fb2-443b-81b8-11a047281cfc", 00:11:19.377 "is_configured": true, 00:11:19.377 "data_offset": 2048, 00:11:19.377 "data_size": 63488 00:11:19.377 }, 00:11:19.377 { 00:11:19.377 "name": "BaseBdev4", 00:11:19.377 "uuid": "31825875-ab21-4135-975b-95152514352a", 00:11:19.377 "is_configured": true, 00:11:19.377 "data_offset": 2048, 00:11:19.377 "data_size": 63488 00:11:19.377 } 00:11:19.377 ] 00:11:19.377 }' 00:11:19.377 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.377 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.637 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:19.637 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.637 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:19.637 12:28:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.637 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.637 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.637 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.637 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:19.637 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:19.637 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:19.637 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.637 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.637 [2024-09-30 12:28:31.417807] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:19.637 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.637 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:19.637 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.637 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.637 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:19.637 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.637 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.897 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:19.897 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:19.897 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:19.897 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:19.897 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.897 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.897 [2024-09-30 12:28:31.578268] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:19.897 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.897 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:19.897 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.897 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.897 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:19.897 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.897 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.897 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.897 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:19.897 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:19.897 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:19.897 12:28:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.897 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.897 [2024-09-30 12:28:31.738437] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:19.897 [2024-09-30 12:28:31.738495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:20.157 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.157 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:20.157 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:20.157 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.157 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:20.157 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.158 BaseBdev2 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.158 [ 00:11:20.158 { 00:11:20.158 "name": "BaseBdev2", 00:11:20.158 "aliases": [ 00:11:20.158 
"6b71a923-dbde-40c3-b990-2af8972a3769" 00:11:20.158 ], 00:11:20.158 "product_name": "Malloc disk", 00:11:20.158 "block_size": 512, 00:11:20.158 "num_blocks": 65536, 00:11:20.158 "uuid": "6b71a923-dbde-40c3-b990-2af8972a3769", 00:11:20.158 "assigned_rate_limits": { 00:11:20.158 "rw_ios_per_sec": 0, 00:11:20.158 "rw_mbytes_per_sec": 0, 00:11:20.158 "r_mbytes_per_sec": 0, 00:11:20.158 "w_mbytes_per_sec": 0 00:11:20.158 }, 00:11:20.158 "claimed": false, 00:11:20.158 "zoned": false, 00:11:20.158 "supported_io_types": { 00:11:20.158 "read": true, 00:11:20.158 "write": true, 00:11:20.158 "unmap": true, 00:11:20.158 "flush": true, 00:11:20.158 "reset": true, 00:11:20.158 "nvme_admin": false, 00:11:20.158 "nvme_io": false, 00:11:20.158 "nvme_io_md": false, 00:11:20.158 "write_zeroes": true, 00:11:20.158 "zcopy": true, 00:11:20.158 "get_zone_info": false, 00:11:20.158 "zone_management": false, 00:11:20.158 "zone_append": false, 00:11:20.158 "compare": false, 00:11:20.158 "compare_and_write": false, 00:11:20.158 "abort": true, 00:11:20.158 "seek_hole": false, 00:11:20.158 "seek_data": false, 00:11:20.158 "copy": true, 00:11:20.158 "nvme_iov_md": false 00:11:20.158 }, 00:11:20.158 "memory_domains": [ 00:11:20.158 { 00:11:20.158 "dma_device_id": "system", 00:11:20.158 "dma_device_type": 1 00:11:20.158 }, 00:11:20.158 { 00:11:20.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.158 "dma_device_type": 2 00:11:20.158 } 00:11:20.158 ], 00:11:20.158 "driver_specific": {} 00:11:20.158 } 00:11:20.158 ] 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:20.158 12:28:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.158 12:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.158 BaseBdev3 00:11:20.158 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.158 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:20.158 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:20.158 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:20.158 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:20.158 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:20.158 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:20.158 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:20.158 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.158 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.158 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.158 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:20.158 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.158 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.158 [ 00:11:20.158 { 
00:11:20.158 "name": "BaseBdev3", 00:11:20.158 "aliases": [ 00:11:20.158 "d08542a7-e272-413d-8e1b-c26bed49ac83" 00:11:20.158 ], 00:11:20.158 "product_name": "Malloc disk", 00:11:20.158 "block_size": 512, 00:11:20.158 "num_blocks": 65536, 00:11:20.158 "uuid": "d08542a7-e272-413d-8e1b-c26bed49ac83", 00:11:20.158 "assigned_rate_limits": { 00:11:20.158 "rw_ios_per_sec": 0, 00:11:20.158 "rw_mbytes_per_sec": 0, 00:11:20.158 "r_mbytes_per_sec": 0, 00:11:20.158 "w_mbytes_per_sec": 0 00:11:20.158 }, 00:11:20.158 "claimed": false, 00:11:20.158 "zoned": false, 00:11:20.158 "supported_io_types": { 00:11:20.158 "read": true, 00:11:20.158 "write": true, 00:11:20.158 "unmap": true, 00:11:20.158 "flush": true, 00:11:20.158 "reset": true, 00:11:20.158 "nvme_admin": false, 00:11:20.158 "nvme_io": false, 00:11:20.422 "nvme_io_md": false, 00:11:20.422 "write_zeroes": true, 00:11:20.422 "zcopy": true, 00:11:20.422 "get_zone_info": false, 00:11:20.422 "zone_management": false, 00:11:20.422 "zone_append": false, 00:11:20.422 "compare": false, 00:11:20.422 "compare_and_write": false, 00:11:20.422 "abort": true, 00:11:20.422 "seek_hole": false, 00:11:20.422 "seek_data": false, 00:11:20.422 "copy": true, 00:11:20.422 "nvme_iov_md": false 00:11:20.422 }, 00:11:20.422 "memory_domains": [ 00:11:20.422 { 00:11:20.422 "dma_device_id": "system", 00:11:20.422 "dma_device_type": 1 00:11:20.422 }, 00:11:20.422 { 00:11:20.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.422 "dma_device_type": 2 00:11:20.422 } 00:11:20.422 ], 00:11:20.422 "driver_specific": {} 00:11:20.422 } 00:11:20.422 ] 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.422 BaseBdev4 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.422 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:20.422 [ 00:11:20.422 { 00:11:20.422 "name": "BaseBdev4", 00:11:20.422 "aliases": [ 00:11:20.422 "fa6b8f6c-229f-4a16-b541-0a6b47cedda6" 00:11:20.422 ], 00:11:20.422 "product_name": "Malloc disk", 00:11:20.422 "block_size": 512, 00:11:20.422 "num_blocks": 65536, 00:11:20.422 "uuid": "fa6b8f6c-229f-4a16-b541-0a6b47cedda6", 00:11:20.422 "assigned_rate_limits": { 00:11:20.422 "rw_ios_per_sec": 0, 00:11:20.422 "rw_mbytes_per_sec": 0, 00:11:20.422 "r_mbytes_per_sec": 0, 00:11:20.422 "w_mbytes_per_sec": 0 00:11:20.422 }, 00:11:20.422 "claimed": false, 00:11:20.422 "zoned": false, 00:11:20.422 "supported_io_types": { 00:11:20.422 "read": true, 00:11:20.422 "write": true, 00:11:20.422 "unmap": true, 00:11:20.422 "flush": true, 00:11:20.422 "reset": true, 00:11:20.422 "nvme_admin": false, 00:11:20.423 "nvme_io": false, 00:11:20.423 "nvme_io_md": false, 00:11:20.423 "write_zeroes": true, 00:11:20.423 "zcopy": true, 00:11:20.423 "get_zone_info": false, 00:11:20.423 "zone_management": false, 00:11:20.423 "zone_append": false, 00:11:20.423 "compare": false, 00:11:20.423 "compare_and_write": false, 00:11:20.423 "abort": true, 00:11:20.423 "seek_hole": false, 00:11:20.423 "seek_data": false, 00:11:20.423 "copy": true, 00:11:20.423 "nvme_iov_md": false 00:11:20.423 }, 00:11:20.423 "memory_domains": [ 00:11:20.423 { 00:11:20.423 "dma_device_id": "system", 00:11:20.423 "dma_device_type": 1 00:11:20.423 }, 00:11:20.423 { 00:11:20.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.423 "dma_device_type": 2 00:11:20.423 } 00:11:20.423 ], 00:11:20.423 "driver_specific": {} 00:11:20.423 } 00:11:20.423 ] 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:20.423 12:28:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.423 [2024-09-30 12:28:32.149978] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:20.423 [2024-09-30 12:28:32.150081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:20.423 [2024-09-30 12:28:32.150122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.423 [2024-09-30 12:28:32.152188] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:20.423 [2024-09-30 12:28:32.152283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.423 "name": "Existed_Raid", 00:11:20.423 "uuid": "662de195-baac-4dd1-b3cc-40c1dd26441b", 00:11:20.423 "strip_size_kb": 64, 00:11:20.423 "state": "configuring", 00:11:20.423 "raid_level": "raid0", 00:11:20.423 "superblock": true, 00:11:20.423 "num_base_bdevs": 4, 00:11:20.423 "num_base_bdevs_discovered": 3, 00:11:20.423 "num_base_bdevs_operational": 4, 00:11:20.423 "base_bdevs_list": [ 00:11:20.423 { 00:11:20.423 "name": "BaseBdev1", 00:11:20.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.423 "is_configured": false, 00:11:20.423 "data_offset": 0, 00:11:20.423 "data_size": 0 00:11:20.423 }, 00:11:20.423 { 00:11:20.423 "name": "BaseBdev2", 00:11:20.423 "uuid": "6b71a923-dbde-40c3-b990-2af8972a3769", 00:11:20.423 "is_configured": true, 00:11:20.423 "data_offset": 2048, 00:11:20.423 "data_size": 63488 
00:11:20.423 }, 00:11:20.423 { 00:11:20.423 "name": "BaseBdev3", 00:11:20.423 "uuid": "d08542a7-e272-413d-8e1b-c26bed49ac83", 00:11:20.423 "is_configured": true, 00:11:20.423 "data_offset": 2048, 00:11:20.423 "data_size": 63488 00:11:20.423 }, 00:11:20.423 { 00:11:20.423 "name": "BaseBdev4", 00:11:20.423 "uuid": "fa6b8f6c-229f-4a16-b541-0a6b47cedda6", 00:11:20.423 "is_configured": true, 00:11:20.423 "data_offset": 2048, 00:11:20.423 "data_size": 63488 00:11:20.423 } 00:11:20.423 ] 00:11:20.423 }' 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.423 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.998 [2024-09-30 12:28:32.613171] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.998 "name": "Existed_Raid", 00:11:20.998 "uuid": "662de195-baac-4dd1-b3cc-40c1dd26441b", 00:11:20.998 "strip_size_kb": 64, 00:11:20.998 "state": "configuring", 00:11:20.998 "raid_level": "raid0", 00:11:20.998 "superblock": true, 00:11:20.998 "num_base_bdevs": 4, 00:11:20.998 "num_base_bdevs_discovered": 2, 00:11:20.998 "num_base_bdevs_operational": 4, 00:11:20.998 "base_bdevs_list": [ 00:11:20.998 { 00:11:20.998 "name": "BaseBdev1", 00:11:20.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.998 "is_configured": false, 00:11:20.998 "data_offset": 0, 00:11:20.998 "data_size": 0 00:11:20.998 }, 00:11:20.998 { 00:11:20.998 "name": null, 00:11:20.998 "uuid": "6b71a923-dbde-40c3-b990-2af8972a3769", 00:11:20.998 "is_configured": false, 00:11:20.998 "data_offset": 0, 00:11:20.998 "data_size": 63488 
00:11:20.998 }, 00:11:20.998 { 00:11:20.998 "name": "BaseBdev3", 00:11:20.998 "uuid": "d08542a7-e272-413d-8e1b-c26bed49ac83", 00:11:20.998 "is_configured": true, 00:11:20.998 "data_offset": 2048, 00:11:20.998 "data_size": 63488 00:11:20.998 }, 00:11:20.998 { 00:11:20.998 "name": "BaseBdev4", 00:11:20.998 "uuid": "fa6b8f6c-229f-4a16-b541-0a6b47cedda6", 00:11:20.998 "is_configured": true, 00:11:20.998 "data_offset": 2048, 00:11:20.998 "data_size": 63488 00:11:20.998 } 00:11:20.998 ] 00:11:20.998 }' 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.998 12:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.257 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:21.257 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.257 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.257 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.257 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.258 [2024-09-30 12:28:33.113426] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.258 BaseBdev1 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.258 [ 00:11:21.258 { 00:11:21.258 "name": "BaseBdev1", 00:11:21.258 "aliases": [ 00:11:21.258 "41bd4fe7-dc44-4309-ba8a-00e3859eabb4" 00:11:21.258 ], 00:11:21.258 "product_name": "Malloc disk", 00:11:21.258 "block_size": 512, 00:11:21.258 "num_blocks": 65536, 00:11:21.258 "uuid": "41bd4fe7-dc44-4309-ba8a-00e3859eabb4", 00:11:21.258 "assigned_rate_limits": { 00:11:21.258 "rw_ios_per_sec": 0, 00:11:21.258 "rw_mbytes_per_sec": 0, 
00:11:21.258 "r_mbytes_per_sec": 0, 00:11:21.258 "w_mbytes_per_sec": 0 00:11:21.258 }, 00:11:21.258 "claimed": true, 00:11:21.258 "claim_type": "exclusive_write", 00:11:21.258 "zoned": false, 00:11:21.258 "supported_io_types": { 00:11:21.258 "read": true, 00:11:21.258 "write": true, 00:11:21.258 "unmap": true, 00:11:21.258 "flush": true, 00:11:21.258 "reset": true, 00:11:21.258 "nvme_admin": false, 00:11:21.258 "nvme_io": false, 00:11:21.258 "nvme_io_md": false, 00:11:21.258 "write_zeroes": true, 00:11:21.258 "zcopy": true, 00:11:21.258 "get_zone_info": false, 00:11:21.258 "zone_management": false, 00:11:21.258 "zone_append": false, 00:11:21.258 "compare": false, 00:11:21.258 "compare_and_write": false, 00:11:21.258 "abort": true, 00:11:21.258 "seek_hole": false, 00:11:21.258 "seek_data": false, 00:11:21.258 "copy": true, 00:11:21.258 "nvme_iov_md": false 00:11:21.258 }, 00:11:21.258 "memory_domains": [ 00:11:21.258 { 00:11:21.258 "dma_device_id": "system", 00:11:21.258 "dma_device_type": 1 00:11:21.258 }, 00:11:21.258 { 00:11:21.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.258 "dma_device_type": 2 00:11:21.258 } 00:11:21.258 ], 00:11:21.258 "driver_specific": {} 00:11:21.258 } 00:11:21.258 ] 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.258 12:28:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.258 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.517 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.517 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.517 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.517 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.517 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.517 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.517 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.517 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.517 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.517 "name": "Existed_Raid", 00:11:21.517 "uuid": "662de195-baac-4dd1-b3cc-40c1dd26441b", 00:11:21.517 "strip_size_kb": 64, 00:11:21.517 "state": "configuring", 00:11:21.517 "raid_level": "raid0", 00:11:21.517 "superblock": true, 00:11:21.517 "num_base_bdevs": 4, 00:11:21.517 "num_base_bdevs_discovered": 3, 00:11:21.517 "num_base_bdevs_operational": 4, 00:11:21.517 "base_bdevs_list": [ 00:11:21.517 { 00:11:21.517 "name": "BaseBdev1", 00:11:21.517 "uuid": "41bd4fe7-dc44-4309-ba8a-00e3859eabb4", 00:11:21.517 "is_configured": true, 00:11:21.517 "data_offset": 2048, 00:11:21.517 "data_size": 63488 00:11:21.517 }, 00:11:21.517 { 
00:11:21.517 "name": null, 00:11:21.517 "uuid": "6b71a923-dbde-40c3-b990-2af8972a3769", 00:11:21.517 "is_configured": false, 00:11:21.517 "data_offset": 0, 00:11:21.517 "data_size": 63488 00:11:21.517 }, 00:11:21.517 { 00:11:21.517 "name": "BaseBdev3", 00:11:21.517 "uuid": "d08542a7-e272-413d-8e1b-c26bed49ac83", 00:11:21.517 "is_configured": true, 00:11:21.517 "data_offset": 2048, 00:11:21.517 "data_size": 63488 00:11:21.517 }, 00:11:21.517 { 00:11:21.517 "name": "BaseBdev4", 00:11:21.517 "uuid": "fa6b8f6c-229f-4a16-b541-0a6b47cedda6", 00:11:21.517 "is_configured": true, 00:11:21.517 "data_offset": 2048, 00:11:21.517 "data_size": 63488 00:11:21.517 } 00:11:21.517 ] 00:11:21.517 }' 00:11:21.518 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.518 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.777 [2024-09-30 12:28:33.616581] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.777 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.036 12:28:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.036 "name": "Existed_Raid", 00:11:22.036 "uuid": "662de195-baac-4dd1-b3cc-40c1dd26441b", 00:11:22.036 "strip_size_kb": 64, 00:11:22.036 "state": "configuring", 00:11:22.036 "raid_level": "raid0", 00:11:22.036 "superblock": true, 00:11:22.036 "num_base_bdevs": 4, 00:11:22.036 "num_base_bdevs_discovered": 2, 00:11:22.036 "num_base_bdevs_operational": 4, 00:11:22.036 "base_bdevs_list": [ 00:11:22.036 { 00:11:22.036 "name": "BaseBdev1", 00:11:22.036 "uuid": "41bd4fe7-dc44-4309-ba8a-00e3859eabb4", 00:11:22.036 "is_configured": true, 00:11:22.036 "data_offset": 2048, 00:11:22.036 "data_size": 63488 00:11:22.036 }, 00:11:22.036 { 00:11:22.036 "name": null, 00:11:22.036 "uuid": "6b71a923-dbde-40c3-b990-2af8972a3769", 00:11:22.036 "is_configured": false, 00:11:22.036 "data_offset": 0, 00:11:22.036 "data_size": 63488 00:11:22.036 }, 00:11:22.036 { 00:11:22.036 "name": null, 00:11:22.036 "uuid": "d08542a7-e272-413d-8e1b-c26bed49ac83", 00:11:22.036 "is_configured": false, 00:11:22.036 "data_offset": 0, 00:11:22.036 "data_size": 63488 00:11:22.036 }, 00:11:22.036 { 00:11:22.036 "name": "BaseBdev4", 00:11:22.036 "uuid": "fa6b8f6c-229f-4a16-b541-0a6b47cedda6", 00:11:22.036 "is_configured": true, 00:11:22.036 "data_offset": 2048, 00:11:22.036 "data_size": 63488 00:11:22.036 } 00:11:22.036 ] 00:11:22.036 }' 00:11:22.036 12:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.036 12:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.299 12:28:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.299 [2024-09-30 12:28:34.091847] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.299 "name": "Existed_Raid", 00:11:22.299 "uuid": "662de195-baac-4dd1-b3cc-40c1dd26441b", 00:11:22.299 "strip_size_kb": 64, 00:11:22.299 "state": "configuring", 00:11:22.299 "raid_level": "raid0", 00:11:22.299 "superblock": true, 00:11:22.299 "num_base_bdevs": 4, 00:11:22.299 "num_base_bdevs_discovered": 3, 00:11:22.299 "num_base_bdevs_operational": 4, 00:11:22.299 "base_bdevs_list": [ 00:11:22.299 { 00:11:22.299 "name": "BaseBdev1", 00:11:22.299 "uuid": "41bd4fe7-dc44-4309-ba8a-00e3859eabb4", 00:11:22.299 "is_configured": true, 00:11:22.299 "data_offset": 2048, 00:11:22.299 "data_size": 63488 00:11:22.299 }, 00:11:22.299 { 00:11:22.299 "name": null, 00:11:22.299 "uuid": "6b71a923-dbde-40c3-b990-2af8972a3769", 00:11:22.299 "is_configured": false, 00:11:22.299 "data_offset": 0, 00:11:22.299 "data_size": 63488 00:11:22.299 }, 00:11:22.299 { 00:11:22.299 "name": "BaseBdev3", 00:11:22.299 "uuid": "d08542a7-e272-413d-8e1b-c26bed49ac83", 00:11:22.299 "is_configured": true, 00:11:22.299 "data_offset": 2048, 00:11:22.299 "data_size": 63488 00:11:22.299 }, 00:11:22.299 { 00:11:22.299 "name": "BaseBdev4", 00:11:22.299 "uuid": 
"fa6b8f6c-229f-4a16-b541-0a6b47cedda6", 00:11:22.299 "is_configured": true, 00:11:22.299 "data_offset": 2048, 00:11:22.299 "data_size": 63488 00:11:22.299 } 00:11:22.299 ] 00:11:22.299 }' 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.299 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.868 [2024-09-30 12:28:34.499162] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.868 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.868 "name": "Existed_Raid", 00:11:22.868 "uuid": "662de195-baac-4dd1-b3cc-40c1dd26441b", 00:11:22.868 "strip_size_kb": 64, 00:11:22.868 "state": "configuring", 00:11:22.868 "raid_level": "raid0", 00:11:22.868 "superblock": true, 00:11:22.868 "num_base_bdevs": 4, 00:11:22.868 "num_base_bdevs_discovered": 2, 00:11:22.868 "num_base_bdevs_operational": 4, 00:11:22.868 "base_bdevs_list": [ 00:11:22.868 { 00:11:22.868 "name": null, 00:11:22.868 
"uuid": "41bd4fe7-dc44-4309-ba8a-00e3859eabb4", 00:11:22.868 "is_configured": false, 00:11:22.868 "data_offset": 0, 00:11:22.868 "data_size": 63488 00:11:22.868 }, 00:11:22.868 { 00:11:22.869 "name": null, 00:11:22.869 "uuid": "6b71a923-dbde-40c3-b990-2af8972a3769", 00:11:22.869 "is_configured": false, 00:11:22.869 "data_offset": 0, 00:11:22.869 "data_size": 63488 00:11:22.869 }, 00:11:22.869 { 00:11:22.869 "name": "BaseBdev3", 00:11:22.869 "uuid": "d08542a7-e272-413d-8e1b-c26bed49ac83", 00:11:22.869 "is_configured": true, 00:11:22.869 "data_offset": 2048, 00:11:22.869 "data_size": 63488 00:11:22.869 }, 00:11:22.869 { 00:11:22.869 "name": "BaseBdev4", 00:11:22.869 "uuid": "fa6b8f6c-229f-4a16-b541-0a6b47cedda6", 00:11:22.869 "is_configured": true, 00:11:22.869 "data_offset": 2048, 00:11:22.869 "data_size": 63488 00:11:22.869 } 00:11:22.869 ] 00:11:22.869 }' 00:11:22.869 12:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.869 12:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.128 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.128 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:23.128 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.128 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.388 [2024-09-30 12:28:35.049461] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.388 12:28:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.388 "name": "Existed_Raid", 00:11:23.388 "uuid": "662de195-baac-4dd1-b3cc-40c1dd26441b", 00:11:23.388 "strip_size_kb": 64, 00:11:23.388 "state": "configuring", 00:11:23.388 "raid_level": "raid0", 00:11:23.388 "superblock": true, 00:11:23.388 "num_base_bdevs": 4, 00:11:23.388 "num_base_bdevs_discovered": 3, 00:11:23.388 "num_base_bdevs_operational": 4, 00:11:23.388 "base_bdevs_list": [ 00:11:23.388 { 00:11:23.388 "name": null, 00:11:23.388 "uuid": "41bd4fe7-dc44-4309-ba8a-00e3859eabb4", 00:11:23.388 "is_configured": false, 00:11:23.388 "data_offset": 0, 00:11:23.388 "data_size": 63488 00:11:23.388 }, 00:11:23.388 { 00:11:23.388 "name": "BaseBdev2", 00:11:23.388 "uuid": "6b71a923-dbde-40c3-b990-2af8972a3769", 00:11:23.388 "is_configured": true, 00:11:23.388 "data_offset": 2048, 00:11:23.388 "data_size": 63488 00:11:23.388 }, 00:11:23.388 { 00:11:23.388 "name": "BaseBdev3", 00:11:23.388 "uuid": "d08542a7-e272-413d-8e1b-c26bed49ac83", 00:11:23.388 "is_configured": true, 00:11:23.388 "data_offset": 2048, 00:11:23.388 "data_size": 63488 00:11:23.388 }, 00:11:23.388 { 00:11:23.388 "name": "BaseBdev4", 00:11:23.388 "uuid": "fa6b8f6c-229f-4a16-b541-0a6b47cedda6", 00:11:23.388 "is_configured": true, 00:11:23.388 "data_offset": 2048, 00:11:23.388 "data_size": 63488 00:11:23.388 } 00:11:23.388 ] 00:11:23.388 }' 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.388 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.648 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.648 12:28:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.648 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.648 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:23.648 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.648 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:23.648 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.648 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:23.648 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.648 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.648 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.648 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 41bd4fe7-dc44-4309-ba8a-00e3859eabb4 00:11:23.648 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.648 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.907 [2024-09-30 12:28:35.561948] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:23.907 [2024-09-30 12:28:35.562261] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:23.907 [2024-09-30 12:28:35.562311] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:23.907 [2024-09-30 12:28:35.562615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:23.907 [2024-09-30 12:28:35.562813] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:23.907 [2024-09-30 12:28:35.562856] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:23.907 NewBaseBdev 00:11:23.907 [2024-09-30 12:28:35.563031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.907 12:28:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.907 [ 00:11:23.907 { 00:11:23.907 "name": "NewBaseBdev", 00:11:23.907 "aliases": [ 00:11:23.907 "41bd4fe7-dc44-4309-ba8a-00e3859eabb4" 00:11:23.907 ], 00:11:23.907 "product_name": "Malloc disk", 00:11:23.907 "block_size": 512, 00:11:23.907 "num_blocks": 65536, 00:11:23.907 "uuid": "41bd4fe7-dc44-4309-ba8a-00e3859eabb4", 00:11:23.907 "assigned_rate_limits": { 00:11:23.907 "rw_ios_per_sec": 0, 00:11:23.907 "rw_mbytes_per_sec": 0, 00:11:23.907 "r_mbytes_per_sec": 0, 00:11:23.907 "w_mbytes_per_sec": 0 00:11:23.907 }, 00:11:23.907 "claimed": true, 00:11:23.907 "claim_type": "exclusive_write", 00:11:23.907 "zoned": false, 00:11:23.907 "supported_io_types": { 00:11:23.907 "read": true, 00:11:23.907 "write": true, 00:11:23.907 "unmap": true, 00:11:23.907 "flush": true, 00:11:23.907 "reset": true, 00:11:23.907 "nvme_admin": false, 00:11:23.907 "nvme_io": false, 00:11:23.907 "nvme_io_md": false, 00:11:23.907 "write_zeroes": true, 00:11:23.907 "zcopy": true, 00:11:23.907 "get_zone_info": false, 00:11:23.907 "zone_management": false, 00:11:23.907 "zone_append": false, 00:11:23.907 "compare": false, 00:11:23.907 "compare_and_write": false, 00:11:23.907 "abort": true, 00:11:23.907 "seek_hole": false, 00:11:23.907 "seek_data": false, 00:11:23.907 "copy": true, 00:11:23.907 "nvme_iov_md": false 00:11:23.907 }, 00:11:23.907 "memory_domains": [ 00:11:23.907 { 00:11:23.907 "dma_device_id": "system", 00:11:23.907 "dma_device_type": 1 00:11:23.907 }, 00:11:23.907 { 00:11:23.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.907 "dma_device_type": 2 00:11:23.907 } 00:11:23.907 ], 00:11:23.907 "driver_specific": {} 00:11:23.907 } 00:11:23.907 ] 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:23.907 12:28:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.907 "name": "Existed_Raid", 00:11:23.907 "uuid": "662de195-baac-4dd1-b3cc-40c1dd26441b", 00:11:23.907 "strip_size_kb": 64, 00:11:23.907 
"state": "online", 00:11:23.907 "raid_level": "raid0", 00:11:23.907 "superblock": true, 00:11:23.907 "num_base_bdevs": 4, 00:11:23.907 "num_base_bdevs_discovered": 4, 00:11:23.907 "num_base_bdevs_operational": 4, 00:11:23.907 "base_bdevs_list": [ 00:11:23.907 { 00:11:23.907 "name": "NewBaseBdev", 00:11:23.907 "uuid": "41bd4fe7-dc44-4309-ba8a-00e3859eabb4", 00:11:23.907 "is_configured": true, 00:11:23.907 "data_offset": 2048, 00:11:23.907 "data_size": 63488 00:11:23.907 }, 00:11:23.907 { 00:11:23.907 "name": "BaseBdev2", 00:11:23.907 "uuid": "6b71a923-dbde-40c3-b990-2af8972a3769", 00:11:23.907 "is_configured": true, 00:11:23.907 "data_offset": 2048, 00:11:23.907 "data_size": 63488 00:11:23.907 }, 00:11:23.907 { 00:11:23.907 "name": "BaseBdev3", 00:11:23.907 "uuid": "d08542a7-e272-413d-8e1b-c26bed49ac83", 00:11:23.907 "is_configured": true, 00:11:23.907 "data_offset": 2048, 00:11:23.907 "data_size": 63488 00:11:23.907 }, 00:11:23.907 { 00:11:23.907 "name": "BaseBdev4", 00:11:23.907 "uuid": "fa6b8f6c-229f-4a16-b541-0a6b47cedda6", 00:11:23.907 "is_configured": true, 00:11:23.907 "data_offset": 2048, 00:11:23.907 "data_size": 63488 00:11:23.907 } 00:11:23.907 ] 00:11:23.907 }' 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.907 12:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.166 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:24.166 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:24.166 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:24.166 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:24.166 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:24.166 
12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:24.166 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:24.166 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:24.166 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.166 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.166 [2024-09-30 12:28:36.037476] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.166 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.425 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:24.425 "name": "Existed_Raid", 00:11:24.425 "aliases": [ 00:11:24.425 "662de195-baac-4dd1-b3cc-40c1dd26441b" 00:11:24.425 ], 00:11:24.425 "product_name": "Raid Volume", 00:11:24.426 "block_size": 512, 00:11:24.426 "num_blocks": 253952, 00:11:24.426 "uuid": "662de195-baac-4dd1-b3cc-40c1dd26441b", 00:11:24.426 "assigned_rate_limits": { 00:11:24.426 "rw_ios_per_sec": 0, 00:11:24.426 "rw_mbytes_per_sec": 0, 00:11:24.426 "r_mbytes_per_sec": 0, 00:11:24.426 "w_mbytes_per_sec": 0 00:11:24.426 }, 00:11:24.426 "claimed": false, 00:11:24.426 "zoned": false, 00:11:24.426 "supported_io_types": { 00:11:24.426 "read": true, 00:11:24.426 "write": true, 00:11:24.426 "unmap": true, 00:11:24.426 "flush": true, 00:11:24.426 "reset": true, 00:11:24.426 "nvme_admin": false, 00:11:24.426 "nvme_io": false, 00:11:24.426 "nvme_io_md": false, 00:11:24.426 "write_zeroes": true, 00:11:24.426 "zcopy": false, 00:11:24.426 "get_zone_info": false, 00:11:24.426 "zone_management": false, 00:11:24.426 "zone_append": false, 00:11:24.426 "compare": false, 00:11:24.426 "compare_and_write": false, 00:11:24.426 "abort": 
false, 00:11:24.426 "seek_hole": false, 00:11:24.426 "seek_data": false, 00:11:24.426 "copy": false, 00:11:24.426 "nvme_iov_md": false 00:11:24.426 }, 00:11:24.426 "memory_domains": [ 00:11:24.426 { 00:11:24.426 "dma_device_id": "system", 00:11:24.426 "dma_device_type": 1 00:11:24.426 }, 00:11:24.426 { 00:11:24.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.426 "dma_device_type": 2 00:11:24.426 }, 00:11:24.426 { 00:11:24.426 "dma_device_id": "system", 00:11:24.426 "dma_device_type": 1 00:11:24.426 }, 00:11:24.426 { 00:11:24.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.426 "dma_device_type": 2 00:11:24.426 }, 00:11:24.426 { 00:11:24.426 "dma_device_id": "system", 00:11:24.426 "dma_device_type": 1 00:11:24.426 }, 00:11:24.426 { 00:11:24.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.426 "dma_device_type": 2 00:11:24.426 }, 00:11:24.426 { 00:11:24.426 "dma_device_id": "system", 00:11:24.426 "dma_device_type": 1 00:11:24.426 }, 00:11:24.426 { 00:11:24.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.426 "dma_device_type": 2 00:11:24.426 } 00:11:24.426 ], 00:11:24.426 "driver_specific": { 00:11:24.426 "raid": { 00:11:24.426 "uuid": "662de195-baac-4dd1-b3cc-40c1dd26441b", 00:11:24.426 "strip_size_kb": 64, 00:11:24.426 "state": "online", 00:11:24.426 "raid_level": "raid0", 00:11:24.426 "superblock": true, 00:11:24.426 "num_base_bdevs": 4, 00:11:24.426 "num_base_bdevs_discovered": 4, 00:11:24.426 "num_base_bdevs_operational": 4, 00:11:24.426 "base_bdevs_list": [ 00:11:24.426 { 00:11:24.426 "name": "NewBaseBdev", 00:11:24.426 "uuid": "41bd4fe7-dc44-4309-ba8a-00e3859eabb4", 00:11:24.426 "is_configured": true, 00:11:24.426 "data_offset": 2048, 00:11:24.426 "data_size": 63488 00:11:24.426 }, 00:11:24.426 { 00:11:24.426 "name": "BaseBdev2", 00:11:24.426 "uuid": "6b71a923-dbde-40c3-b990-2af8972a3769", 00:11:24.426 "is_configured": true, 00:11:24.426 "data_offset": 2048, 00:11:24.426 "data_size": 63488 00:11:24.426 }, 00:11:24.426 { 00:11:24.426 
"name": "BaseBdev3", 00:11:24.426 "uuid": "d08542a7-e272-413d-8e1b-c26bed49ac83", 00:11:24.426 "is_configured": true, 00:11:24.426 "data_offset": 2048, 00:11:24.426 "data_size": 63488 00:11:24.426 }, 00:11:24.426 { 00:11:24.426 "name": "BaseBdev4", 00:11:24.426 "uuid": "fa6b8f6c-229f-4a16-b541-0a6b47cedda6", 00:11:24.426 "is_configured": true, 00:11:24.426 "data_offset": 2048, 00:11:24.426 "data_size": 63488 00:11:24.426 } 00:11:24.426 ] 00:11:24.426 } 00:11:24.426 } 00:11:24.426 }' 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:24.426 BaseBdev2 00:11:24.426 BaseBdev3 00:11:24.426 BaseBdev4' 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.426 12:28:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.426 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.686 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.686 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.686 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:24.686 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.686 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.686 [2024-09-30 12:28:36.336636] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:24.686 [2024-09-30 12:28:36.336669] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:24.686 [2024-09-30 12:28:36.336766] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:24.686 [2024-09-30 12:28:36.336839] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:24.686 [2024-09-30 12:28:36.336850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:24.686 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.686 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69925 00:11:24.686 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 69925 ']' 00:11:24.686 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 69925 00:11:24.686 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:24.686 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:24.686 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69925 00:11:24.686 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:24.686 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:24.686 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69925' 00:11:24.686 killing process with pid 69925 00:11:24.686 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 69925 00:11:24.686 [2024-09-30 12:28:36.385983] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:24.686 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 69925 00:11:24.947 [2024-09-30 12:28:36.795940] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:26.327 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:26.327 00:11:26.327 real 0m11.495s 00:11:26.327 user 0m17.817s 00:11:26.327 sys 0m2.146s 00:11:26.327 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:26.327 12:28:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.327 ************************************ 00:11:26.327 END TEST raid_state_function_test_sb 00:11:26.327 ************************************ 00:11:26.327 12:28:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:26.327 12:28:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:26.327 12:28:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:26.327 12:28:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:26.327 ************************************ 00:11:26.327 START TEST raid_superblock_test 00:11:26.327 ************************************ 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70596 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70596 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 70596 ']' 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:26.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:26.327 12:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.587 [2024-09-30 12:28:38.286619] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:26.587 [2024-09-30 12:28:38.286840] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70596 ] 00:11:26.587 [2024-09-30 12:28:38.449507] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.847 [2024-09-30 12:28:38.689273] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.106 [2024-09-30 12:28:38.912419] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.106 [2024-09-30 12:28:38.912554] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:27.365 
12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.365 malloc1 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.365 [2024-09-30 12:28:39.161099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:27.365 [2024-09-30 12:28:39.161236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.365 [2024-09-30 12:28:39.161276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:27.365 [2024-09-30 12:28:39.161305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.365 [2024-09-30 12:28:39.163634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.365 [2024-09-30 12:28:39.163702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:27.365 pt1 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.365 malloc2 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.365 [2024-09-30 12:28:39.235117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:27.365 [2024-09-30 12:28:39.235221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.365 [2024-09-30 12:28:39.235260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:27.365 [2024-09-30 12:28:39.235301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.365 [2024-09-30 12:28:39.237669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.365 [2024-09-30 12:28:39.237738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:27.365 
pt2 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.365 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.625 malloc3 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.625 [2024-09-30 12:28:39.300372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:27.625 [2024-09-30 12:28:39.300425] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.625 [2024-09-30 12:28:39.300445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:27.625 [2024-09-30 12:28:39.300454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.625 [2024-09-30 12:28:39.302949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.625 [2024-09-30 12:28:39.302983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:27.625 pt3 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.625 malloc4 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.625 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.626 [2024-09-30 12:28:39.361672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:27.626 [2024-09-30 12:28:39.361814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.626 [2024-09-30 12:28:39.361853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:27.626 [2024-09-30 12:28:39.361882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.626 [2024-09-30 12:28:39.364162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.626 [2024-09-30 12:28:39.364230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:27.626 pt4 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.626 [2024-09-30 12:28:39.373691] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:27.626 [2024-09-30 
12:28:39.375773] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:27.626 [2024-09-30 12:28:39.375873] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:27.626 [2024-09-30 12:28:39.375955] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:27.626 [2024-09-30 12:28:39.376175] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:27.626 [2024-09-30 12:28:39.376224] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:27.626 [2024-09-30 12:28:39.376490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:27.626 [2024-09-30 12:28:39.376679] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:27.626 [2024-09-30 12:28:39.376734] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:27.626 [2024-09-30 12:28:39.376948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.626 "name": "raid_bdev1", 00:11:27.626 "uuid": "0ca55405-acca-40d3-9140-20b945def956", 00:11:27.626 "strip_size_kb": 64, 00:11:27.626 "state": "online", 00:11:27.626 "raid_level": "raid0", 00:11:27.626 "superblock": true, 00:11:27.626 "num_base_bdevs": 4, 00:11:27.626 "num_base_bdevs_discovered": 4, 00:11:27.626 "num_base_bdevs_operational": 4, 00:11:27.626 "base_bdevs_list": [ 00:11:27.626 { 00:11:27.626 "name": "pt1", 00:11:27.626 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:27.626 "is_configured": true, 00:11:27.626 "data_offset": 2048, 00:11:27.626 "data_size": 63488 00:11:27.626 }, 00:11:27.626 { 00:11:27.626 "name": "pt2", 00:11:27.626 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.626 "is_configured": true, 00:11:27.626 "data_offset": 2048, 00:11:27.626 "data_size": 63488 00:11:27.626 }, 00:11:27.626 { 00:11:27.626 "name": "pt3", 00:11:27.626 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.626 "is_configured": true, 00:11:27.626 "data_offset": 2048, 00:11:27.626 
"data_size": 63488 00:11:27.626 }, 00:11:27.626 { 00:11:27.626 "name": "pt4", 00:11:27.626 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:27.626 "is_configured": true, 00:11:27.626 "data_offset": 2048, 00:11:27.626 "data_size": 63488 00:11:27.626 } 00:11:27.626 ] 00:11:27.626 }' 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.626 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.196 [2024-09-30 12:28:39.865100] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:28.196 "name": "raid_bdev1", 00:11:28.196 "aliases": [ 00:11:28.196 "0ca55405-acca-40d3-9140-20b945def956" 
00:11:28.196 ], 00:11:28.196 "product_name": "Raid Volume", 00:11:28.196 "block_size": 512, 00:11:28.196 "num_blocks": 253952, 00:11:28.196 "uuid": "0ca55405-acca-40d3-9140-20b945def956", 00:11:28.196 "assigned_rate_limits": { 00:11:28.196 "rw_ios_per_sec": 0, 00:11:28.196 "rw_mbytes_per_sec": 0, 00:11:28.196 "r_mbytes_per_sec": 0, 00:11:28.196 "w_mbytes_per_sec": 0 00:11:28.196 }, 00:11:28.196 "claimed": false, 00:11:28.196 "zoned": false, 00:11:28.196 "supported_io_types": { 00:11:28.196 "read": true, 00:11:28.196 "write": true, 00:11:28.196 "unmap": true, 00:11:28.196 "flush": true, 00:11:28.196 "reset": true, 00:11:28.196 "nvme_admin": false, 00:11:28.196 "nvme_io": false, 00:11:28.196 "nvme_io_md": false, 00:11:28.196 "write_zeroes": true, 00:11:28.196 "zcopy": false, 00:11:28.196 "get_zone_info": false, 00:11:28.196 "zone_management": false, 00:11:28.196 "zone_append": false, 00:11:28.196 "compare": false, 00:11:28.196 "compare_and_write": false, 00:11:28.196 "abort": false, 00:11:28.196 "seek_hole": false, 00:11:28.196 "seek_data": false, 00:11:28.196 "copy": false, 00:11:28.196 "nvme_iov_md": false 00:11:28.196 }, 00:11:28.196 "memory_domains": [ 00:11:28.196 { 00:11:28.196 "dma_device_id": "system", 00:11:28.196 "dma_device_type": 1 00:11:28.196 }, 00:11:28.196 { 00:11:28.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.196 "dma_device_type": 2 00:11:28.196 }, 00:11:28.196 { 00:11:28.196 "dma_device_id": "system", 00:11:28.196 "dma_device_type": 1 00:11:28.196 }, 00:11:28.196 { 00:11:28.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.196 "dma_device_type": 2 00:11:28.196 }, 00:11:28.196 { 00:11:28.196 "dma_device_id": "system", 00:11:28.196 "dma_device_type": 1 00:11:28.196 }, 00:11:28.196 { 00:11:28.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.196 "dma_device_type": 2 00:11:28.196 }, 00:11:28.196 { 00:11:28.196 "dma_device_id": "system", 00:11:28.196 "dma_device_type": 1 00:11:28.196 }, 00:11:28.196 { 00:11:28.196 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:28.196 "dma_device_type": 2 00:11:28.196 } 00:11:28.196 ], 00:11:28.196 "driver_specific": { 00:11:28.196 "raid": { 00:11:28.196 "uuid": "0ca55405-acca-40d3-9140-20b945def956", 00:11:28.196 "strip_size_kb": 64, 00:11:28.196 "state": "online", 00:11:28.196 "raid_level": "raid0", 00:11:28.196 "superblock": true, 00:11:28.196 "num_base_bdevs": 4, 00:11:28.196 "num_base_bdevs_discovered": 4, 00:11:28.196 "num_base_bdevs_operational": 4, 00:11:28.196 "base_bdevs_list": [ 00:11:28.196 { 00:11:28.196 "name": "pt1", 00:11:28.196 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:28.196 "is_configured": true, 00:11:28.196 "data_offset": 2048, 00:11:28.196 "data_size": 63488 00:11:28.196 }, 00:11:28.196 { 00:11:28.196 "name": "pt2", 00:11:28.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.196 "is_configured": true, 00:11:28.196 "data_offset": 2048, 00:11:28.196 "data_size": 63488 00:11:28.196 }, 00:11:28.196 { 00:11:28.196 "name": "pt3", 00:11:28.196 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:28.196 "is_configured": true, 00:11:28.196 "data_offset": 2048, 00:11:28.196 "data_size": 63488 00:11:28.196 }, 00:11:28.196 { 00:11:28.196 "name": "pt4", 00:11:28.196 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:28.196 "is_configured": true, 00:11:28.196 "data_offset": 2048, 00:11:28.196 "data_size": 63488 00:11:28.196 } 00:11:28.196 ] 00:11:28.196 } 00:11:28.196 } 00:11:28.196 }' 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:28.196 pt2 00:11:28.196 pt3 00:11:28.196 pt4' 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.196 12:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.197 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.197 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.197 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.197 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.197 12:28:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.197 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:28.197 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.197 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.197 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.197 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.197 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.197 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.197 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:28.197 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.197 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.197 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.197 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.457 [2024-09-30 12:28:40.124575] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0ca55405-acca-40d3-9140-20b945def956 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0ca55405-acca-40d3-9140-20b945def956 ']' 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.457 [2024-09-30 12:28:40.168231] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.457 [2024-09-30 12:28:40.168256] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.457 [2024-09-30 12:28:40.168323] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.457 [2024-09-30 12:28:40.168386] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.457 [2024-09-30 12:28:40.168412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:28.457 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:28.458 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:28.458 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.458 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.458 [2024-09-30 12:28:40.331964] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:28.458 [2024-09-30 12:28:40.334029] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:28.458 [2024-09-30 12:28:40.334073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:28.458 [2024-09-30 12:28:40.334104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:28.458 [2024-09-30 12:28:40.334149] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:28.458 [2024-09-30 12:28:40.334194] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:28.458 [2024-09-30 12:28:40.334212] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:28.458 [2024-09-30 12:28:40.334229] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:28.458 [2024-09-30 12:28:40.334242] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.458 [2024-09-30 12:28:40.334252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:11:28.458 request: 00:11:28.458 { 00:11:28.458 "name": "raid_bdev1", 00:11:28.458 "raid_level": "raid0", 00:11:28.458 "base_bdevs": [ 00:11:28.458 "malloc1", 00:11:28.458 "malloc2", 00:11:28.458 "malloc3", 00:11:28.458 "malloc4" 00:11:28.458 ], 00:11:28.458 "strip_size_kb": 64, 00:11:28.458 "superblock": false, 00:11:28.458 "method": "bdev_raid_create", 00:11:28.458 "req_id": 1 00:11:28.458 } 00:11:28.458 Got JSON-RPC error response 00:11:28.458 response: 00:11:28.458 { 00:11:28.458 "code": -17, 00:11:28.458 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:28.458 } 00:11:28.458 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:28.458 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:28.458 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:28.458 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:28.458 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:28.458 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:28.458 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.458 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.458 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.718 [2024-09-30 12:28:40.399856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:28.718 [2024-09-30 12:28:40.399942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.718 [2024-09-30 12:28:40.399973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:28.718 [2024-09-30 12:28:40.400000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.718 [2024-09-30 12:28:40.402340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.718 [2024-09-30 12:28:40.402410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:28.718 [2024-09-30 12:28:40.402498] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:28.718 [2024-09-30 12:28:40.402571] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:28.718 pt1 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.718 "name": "raid_bdev1", 00:11:28.718 "uuid": "0ca55405-acca-40d3-9140-20b945def956", 00:11:28.718 "strip_size_kb": 64, 00:11:28.718 "state": "configuring", 00:11:28.718 "raid_level": "raid0", 00:11:28.718 "superblock": true, 00:11:28.718 "num_base_bdevs": 4, 00:11:28.718 "num_base_bdevs_discovered": 1, 00:11:28.718 "num_base_bdevs_operational": 4, 00:11:28.718 "base_bdevs_list": [ 00:11:28.718 { 00:11:28.718 "name": "pt1", 00:11:28.718 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:28.718 "is_configured": true, 00:11:28.718 "data_offset": 2048, 00:11:28.718 "data_size": 63488 00:11:28.718 }, 00:11:28.718 { 00:11:28.718 "name": null, 00:11:28.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.718 "is_configured": false, 00:11:28.718 "data_offset": 2048, 00:11:28.718 "data_size": 63488 00:11:28.718 }, 00:11:28.718 { 00:11:28.718 "name": null, 00:11:28.718 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:28.718 "is_configured": false, 00:11:28.718 "data_offset": 2048, 00:11:28.718 "data_size": 63488 00:11:28.718 }, 00:11:28.718 { 00:11:28.718 "name": null, 00:11:28.718 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:28.718 "is_configured": false, 00:11:28.718 "data_offset": 2048, 00:11:28.718 "data_size": 63488 00:11:28.718 } 00:11:28.718 ] 00:11:28.718 }' 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.718 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.978 [2024-09-30 12:28:40.839167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:28.978 [2024-09-30 12:28:40.839228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.978 [2024-09-30 12:28:40.839246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:28.978 [2024-09-30 12:28:40.839257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.978 [2024-09-30 12:28:40.839752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.978 [2024-09-30 12:28:40.839792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:28.978 [2024-09-30 12:28:40.839871] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:28.978 [2024-09-30 12:28:40.839897] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:28.978 pt2 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.978 [2024-09-30 12:28:40.851160] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.978 12:28:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.978 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.239 12:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.239 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.239 "name": "raid_bdev1", 00:11:29.239 "uuid": "0ca55405-acca-40d3-9140-20b945def956", 00:11:29.239 "strip_size_kb": 64, 00:11:29.239 "state": "configuring", 00:11:29.239 "raid_level": "raid0", 00:11:29.239 "superblock": true, 00:11:29.239 "num_base_bdevs": 4, 00:11:29.239 "num_base_bdevs_discovered": 1, 00:11:29.239 "num_base_bdevs_operational": 4, 00:11:29.239 "base_bdevs_list": [ 00:11:29.239 { 00:11:29.239 "name": "pt1", 00:11:29.239 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:29.239 "is_configured": true, 00:11:29.239 "data_offset": 2048, 00:11:29.239 "data_size": 63488 00:11:29.239 }, 00:11:29.239 { 00:11:29.239 "name": null, 00:11:29.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:29.239 "is_configured": false, 00:11:29.239 "data_offset": 0, 00:11:29.239 "data_size": 63488 00:11:29.239 }, 00:11:29.239 { 00:11:29.239 "name": null, 00:11:29.239 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:29.239 "is_configured": false, 00:11:29.239 "data_offset": 2048, 00:11:29.239 "data_size": 63488 00:11:29.239 }, 00:11:29.239 { 00:11:29.239 "name": null, 00:11:29.239 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:29.239 "is_configured": false, 00:11:29.239 "data_offset": 2048, 00:11:29.239 "data_size": 63488 00:11:29.239 } 00:11:29.239 ] 00:11:29.239 }' 00:11:29.239 12:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.239 12:28:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.499 [2024-09-30 12:28:41.342308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:29.499 [2024-09-30 12:28:41.342403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.499 [2024-09-30 12:28:41.342439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:29.499 [2024-09-30 12:28:41.342467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.499 [2024-09-30 12:28:41.342923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.499 [2024-09-30 12:28:41.342976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:29.499 [2024-09-30 12:28:41.343071] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:29.499 [2024-09-30 12:28:41.343126] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:29.499 pt2 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.499 [2024-09-30 12:28:41.354284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:29.499 [2024-09-30 12:28:41.354360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.499 [2024-09-30 12:28:41.354397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:29.499 [2024-09-30 12:28:41.354409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.499 [2024-09-30 12:28:41.354767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.499 [2024-09-30 12:28:41.354784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:29.499 [2024-09-30 12:28:41.354841] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:29.499 [2024-09-30 12:28:41.354857] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:29.499 pt3 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.499 [2024-09-30 12:28:41.366237] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:29.499 [2024-09-30 12:28:41.366278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.499 [2024-09-30 12:28:41.366296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:29.499 [2024-09-30 12:28:41.366303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.499 [2024-09-30 12:28:41.366657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.499 [2024-09-30 12:28:41.366671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:29.499 [2024-09-30 12:28:41.366723] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:29.499 [2024-09-30 12:28:41.366761] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:29.499 [2024-09-30 12:28:41.366900] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:29.499 [2024-09-30 12:28:41.366909] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:29.499 [2024-09-30 12:28:41.367153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:29.499 [2024-09-30 12:28:41.367297] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:29.499 [2024-09-30 12:28:41.367309] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:29.499 [2024-09-30 12:28:41.367459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.499 pt4 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.499 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.759 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.759 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.759 "name": "raid_bdev1", 00:11:29.759 "uuid": "0ca55405-acca-40d3-9140-20b945def956", 00:11:29.759 "strip_size_kb": 64, 00:11:29.759 "state": "online", 00:11:29.759 "raid_level": "raid0", 00:11:29.759 
"superblock": true, 00:11:29.759 "num_base_bdevs": 4, 00:11:29.759 "num_base_bdevs_discovered": 4, 00:11:29.759 "num_base_bdevs_operational": 4, 00:11:29.759 "base_bdevs_list": [ 00:11:29.759 { 00:11:29.759 "name": "pt1", 00:11:29.759 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:29.759 "is_configured": true, 00:11:29.759 "data_offset": 2048, 00:11:29.759 "data_size": 63488 00:11:29.759 }, 00:11:29.759 { 00:11:29.759 "name": "pt2", 00:11:29.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:29.759 "is_configured": true, 00:11:29.759 "data_offset": 2048, 00:11:29.759 "data_size": 63488 00:11:29.759 }, 00:11:29.759 { 00:11:29.759 "name": "pt3", 00:11:29.759 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:29.759 "is_configured": true, 00:11:29.759 "data_offset": 2048, 00:11:29.759 "data_size": 63488 00:11:29.759 }, 00:11:29.759 { 00:11:29.759 "name": "pt4", 00:11:29.759 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:29.759 "is_configured": true, 00:11:29.759 "data_offset": 2048, 00:11:29.759 "data_size": 63488 00:11:29.759 } 00:11:29.760 ] 00:11:29.760 }' 00:11:29.760 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.760 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.019 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:30.020 12:28:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.020 [2024-09-30 12:28:41.769917] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:30.020 "name": "raid_bdev1", 00:11:30.020 "aliases": [ 00:11:30.020 "0ca55405-acca-40d3-9140-20b945def956" 00:11:30.020 ], 00:11:30.020 "product_name": "Raid Volume", 00:11:30.020 "block_size": 512, 00:11:30.020 "num_blocks": 253952, 00:11:30.020 "uuid": "0ca55405-acca-40d3-9140-20b945def956", 00:11:30.020 "assigned_rate_limits": { 00:11:30.020 "rw_ios_per_sec": 0, 00:11:30.020 "rw_mbytes_per_sec": 0, 00:11:30.020 "r_mbytes_per_sec": 0, 00:11:30.020 "w_mbytes_per_sec": 0 00:11:30.020 }, 00:11:30.020 "claimed": false, 00:11:30.020 "zoned": false, 00:11:30.020 "supported_io_types": { 00:11:30.020 "read": true, 00:11:30.020 "write": true, 00:11:30.020 "unmap": true, 00:11:30.020 "flush": true, 00:11:30.020 "reset": true, 00:11:30.020 "nvme_admin": false, 00:11:30.020 "nvme_io": false, 00:11:30.020 "nvme_io_md": false, 00:11:30.020 "write_zeroes": true, 00:11:30.020 "zcopy": false, 00:11:30.020 "get_zone_info": false, 00:11:30.020 "zone_management": false, 00:11:30.020 "zone_append": false, 00:11:30.020 "compare": false, 00:11:30.020 "compare_and_write": false, 00:11:30.020 "abort": false, 00:11:30.020 "seek_hole": false, 00:11:30.020 "seek_data": false, 00:11:30.020 "copy": false, 00:11:30.020 "nvme_iov_md": false 00:11:30.020 }, 00:11:30.020 
"memory_domains": [ 00:11:30.020 { 00:11:30.020 "dma_device_id": "system", 00:11:30.020 "dma_device_type": 1 00:11:30.020 }, 00:11:30.020 { 00:11:30.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.020 "dma_device_type": 2 00:11:30.020 }, 00:11:30.020 { 00:11:30.020 "dma_device_id": "system", 00:11:30.020 "dma_device_type": 1 00:11:30.020 }, 00:11:30.020 { 00:11:30.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.020 "dma_device_type": 2 00:11:30.020 }, 00:11:30.020 { 00:11:30.020 "dma_device_id": "system", 00:11:30.020 "dma_device_type": 1 00:11:30.020 }, 00:11:30.020 { 00:11:30.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.020 "dma_device_type": 2 00:11:30.020 }, 00:11:30.020 { 00:11:30.020 "dma_device_id": "system", 00:11:30.020 "dma_device_type": 1 00:11:30.020 }, 00:11:30.020 { 00:11:30.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.020 "dma_device_type": 2 00:11:30.020 } 00:11:30.020 ], 00:11:30.020 "driver_specific": { 00:11:30.020 "raid": { 00:11:30.020 "uuid": "0ca55405-acca-40d3-9140-20b945def956", 00:11:30.020 "strip_size_kb": 64, 00:11:30.020 "state": "online", 00:11:30.020 "raid_level": "raid0", 00:11:30.020 "superblock": true, 00:11:30.020 "num_base_bdevs": 4, 00:11:30.020 "num_base_bdevs_discovered": 4, 00:11:30.020 "num_base_bdevs_operational": 4, 00:11:30.020 "base_bdevs_list": [ 00:11:30.020 { 00:11:30.020 "name": "pt1", 00:11:30.020 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:30.020 "is_configured": true, 00:11:30.020 "data_offset": 2048, 00:11:30.020 "data_size": 63488 00:11:30.020 }, 00:11:30.020 { 00:11:30.020 "name": "pt2", 00:11:30.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:30.020 "is_configured": true, 00:11:30.020 "data_offset": 2048, 00:11:30.020 "data_size": 63488 00:11:30.020 }, 00:11:30.020 { 00:11:30.020 "name": "pt3", 00:11:30.020 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:30.020 "is_configured": true, 00:11:30.020 "data_offset": 2048, 00:11:30.020 "data_size": 63488 
00:11:30.020 }, 00:11:30.020 { 00:11:30.020 "name": "pt4", 00:11:30.020 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:30.020 "is_configured": true, 00:11:30.020 "data_offset": 2048, 00:11:30.020 "data_size": 63488 00:11:30.020 } 00:11:30.020 ] 00:11:30.020 } 00:11:30.020 } 00:11:30.020 }' 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:30.020 pt2 00:11:30.020 pt3 00:11:30.020 pt4' 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.020 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.280 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.280 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.280 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.280 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:30.280 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.280 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.280 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.280 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.280 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.280 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.280 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.280 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:30.280 12:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.281 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.281 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.281 12:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.281 [2024-09-30 12:28:42.069299] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0ca55405-acca-40d3-9140-20b945def956 '!=' 0ca55405-acca-40d3-9140-20b945def956 ']' 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70596 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 70596 ']' 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 70596 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70596 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70596' 00:11:30.281 killing process with pid 70596 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 70596 00:11:30.281 [2024-09-30 12:28:42.130453] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:30.281 [2024-09-30 12:28:42.130585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.281 12:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 70596 00:11:30.281 [2024-09-30 12:28:42.130678] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:30.281 [2024-09-30 12:28:42.130690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:30.851 [2024-09-30 12:28:42.546811] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.232 12:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:32.232 00:11:32.232 real 0m5.673s 00:11:32.232 user 0m7.831s 00:11:32.232 sys 0m1.081s 00:11:32.232 12:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.232 ************************************ 00:11:32.232 END TEST raid_superblock_test 00:11:32.232 ************************************ 00:11:32.232 12:28:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.232 12:28:43 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:32.232 12:28:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:32.232 12:28:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.232 12:28:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.232 ************************************ 00:11:32.232 START TEST raid_read_error_test 00:11:32.232 ************************************ 00:11:32.232 12:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:11:32.232 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:32.232 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:32.232 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:32.232 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qUcgBfiWYa 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70861 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70861 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 70861 ']' 00:11:32.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:32.233 12:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.233 [2024-09-30 12:28:44.049153] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:32.233 [2024-09-30 12:28:44.049271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70861 ] 00:11:32.493 [2024-09-30 12:28:44.211428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.753 [2024-09-30 12:28:44.451336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.013 [2024-09-30 12:28:44.675885] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.013 [2024-09-30 12:28:44.675924] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.013 12:28:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:33.013 12:28:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:33.013 12:28:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:33.013 12:28:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:33.013 12:28:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.013 12:28:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.273 BaseBdev1_malloc 00:11:33.273 12:28:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.273 12:28:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:33.273 12:28:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.273 12:28:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.273 true 00:11:33.274 12:28:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:33.274 12:28:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:33.274 12:28:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.274 12:28:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.274 [2024-09-30 12:28:44.931801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:33.274 [2024-09-30 12:28:44.931914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.274 [2024-09-30 12:28:44.931937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:33.274 [2024-09-30 12:28:44.931951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.274 [2024-09-30 12:28:44.934302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.274 [2024-09-30 12:28:44.934340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:33.274 BaseBdev1 00:11:33.274 12:28:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.274 12:28:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:33.274 12:28:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:33.274 12:28:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.274 12:28:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.274 BaseBdev2_malloc 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.274 true 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.274 [2024-09-30 12:28:45.031785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:33.274 [2024-09-30 12:28:45.031844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.274 [2024-09-30 12:28:45.031860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:33.274 [2024-09-30 12:28:45.031872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.274 [2024-09-30 12:28:45.034139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.274 [2024-09-30 12:28:45.034176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:33.274 BaseBdev2 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.274 BaseBdev3_malloc 00:11:33.274 12:28:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.274 true 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.274 [2024-09-30 12:28:45.104495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:33.274 [2024-09-30 12:28:45.104610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.274 [2024-09-30 12:28:45.104631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:33.274 [2024-09-30 12:28:45.104642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.274 [2024-09-30 12:28:45.107020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.274 [2024-09-30 12:28:45.107056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:33.274 BaseBdev3 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.274 BaseBdev4_malloc 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.274 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.536 true 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.536 [2024-09-30 12:28:45.175229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:33.536 [2024-09-30 12:28:45.175279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.536 [2024-09-30 12:28:45.175297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:33.536 [2024-09-30 12:28:45.175311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.536 [2024-09-30 12:28:45.177678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.536 [2024-09-30 12:28:45.177716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:33.536 BaseBdev4 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.536 [2024-09-30 12:28:45.187308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.536 [2024-09-30 12:28:45.189473] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:33.536 [2024-09-30 12:28:45.189546] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:33.536 [2024-09-30 12:28:45.189602] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:33.536 [2024-09-30 12:28:45.189847] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:33.536 [2024-09-30 12:28:45.189863] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:33.536 [2024-09-30 12:28:45.190101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:33.536 [2024-09-30 12:28:45.190261] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:33.536 [2024-09-30 12:28:45.190270] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:33.536 [2024-09-30 12:28:45.190432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:33.536 12:28:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.536 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.537 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.537 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.537 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.537 "name": "raid_bdev1", 00:11:33.537 "uuid": "d802bc65-af52-45ce-aa8e-a3075ad36c44", 00:11:33.537 "strip_size_kb": 64, 00:11:33.537 "state": "online", 00:11:33.537 "raid_level": "raid0", 00:11:33.537 "superblock": true, 00:11:33.537 "num_base_bdevs": 4, 00:11:33.537 "num_base_bdevs_discovered": 4, 00:11:33.537 "num_base_bdevs_operational": 4, 00:11:33.537 "base_bdevs_list": [ 00:11:33.537 
{ 00:11:33.537 "name": "BaseBdev1", 00:11:33.537 "uuid": "50d4989d-abca-5475-8078-f20da7c116c2", 00:11:33.537 "is_configured": true, 00:11:33.537 "data_offset": 2048, 00:11:33.537 "data_size": 63488 00:11:33.537 }, 00:11:33.537 { 00:11:33.537 "name": "BaseBdev2", 00:11:33.537 "uuid": "9350845a-452b-5243-8d07-0ecf7cc0e82d", 00:11:33.537 "is_configured": true, 00:11:33.537 "data_offset": 2048, 00:11:33.537 "data_size": 63488 00:11:33.537 }, 00:11:33.537 { 00:11:33.537 "name": "BaseBdev3", 00:11:33.537 "uuid": "45a444f4-cec8-50e2-bd8a-df0637125ed9", 00:11:33.537 "is_configured": true, 00:11:33.537 "data_offset": 2048, 00:11:33.537 "data_size": 63488 00:11:33.537 }, 00:11:33.537 { 00:11:33.537 "name": "BaseBdev4", 00:11:33.537 "uuid": "4c453889-303a-56f4-8392-8f972464a4cc", 00:11:33.537 "is_configured": true, 00:11:33.537 "data_offset": 2048, 00:11:33.537 "data_size": 63488 00:11:33.537 } 00:11:33.537 ] 00:11:33.537 }' 00:11:33.537 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.537 12:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.816 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:33.816 12:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:33.816 [2024-09-30 12:28:45.679742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.772 12:28:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.772 12:28:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.772 "name": "raid_bdev1", 00:11:34.772 "uuid": "d802bc65-af52-45ce-aa8e-a3075ad36c44", 00:11:34.772 "strip_size_kb": 64, 00:11:34.772 "state": "online", 00:11:34.772 "raid_level": "raid0", 00:11:34.772 "superblock": true, 00:11:34.772 "num_base_bdevs": 4, 00:11:34.772 "num_base_bdevs_discovered": 4, 00:11:34.772 "num_base_bdevs_operational": 4, 00:11:34.772 "base_bdevs_list": [ 00:11:34.772 { 00:11:34.772 "name": "BaseBdev1", 00:11:34.772 "uuid": "50d4989d-abca-5475-8078-f20da7c116c2", 00:11:34.772 "is_configured": true, 00:11:34.772 "data_offset": 2048, 00:11:34.772 "data_size": 63488 00:11:34.772 }, 00:11:34.772 { 00:11:34.772 "name": "BaseBdev2", 00:11:34.772 "uuid": "9350845a-452b-5243-8d07-0ecf7cc0e82d", 00:11:34.772 "is_configured": true, 00:11:34.772 "data_offset": 2048, 00:11:34.772 "data_size": 63488 00:11:34.772 }, 00:11:34.772 { 00:11:34.772 "name": "BaseBdev3", 00:11:34.772 "uuid": "45a444f4-cec8-50e2-bd8a-df0637125ed9", 00:11:34.772 "is_configured": true, 00:11:34.772 "data_offset": 2048, 00:11:34.772 "data_size": 63488 00:11:34.772 }, 00:11:34.772 { 00:11:34.772 "name": "BaseBdev4", 00:11:34.772 "uuid": "4c453889-303a-56f4-8392-8f972464a4cc", 00:11:34.772 "is_configured": true, 00:11:34.772 "data_offset": 2048, 00:11:34.772 "data_size": 63488 00:11:34.772 } 00:11:34.772 ] 00:11:34.772 }' 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.772 12:28:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.342 12:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:35.342 12:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.342 12:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.342 [2024-09-30 12:28:47.036049] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:35.342 [2024-09-30 12:28:47.036185] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.342 [2024-09-30 12:28:47.038821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.342 [2024-09-30 12:28:47.038925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.342 [2024-09-30 12:28:47.038990] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.342 [2024-09-30 12:28:47.039036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:35.342 { 00:11:35.342 "results": [ 00:11:35.342 { 00:11:35.342 "job": "raid_bdev1", 00:11:35.342 "core_mask": "0x1", 00:11:35.342 "workload": "randrw", 00:11:35.342 "percentage": 50, 00:11:35.342 "status": "finished", 00:11:35.342 "queue_depth": 1, 00:11:35.342 "io_size": 131072, 00:11:35.342 "runtime": 1.356952, 00:11:35.342 "iops": 14281.271555662986, 00:11:35.342 "mibps": 1785.1589444578733, 00:11:35.342 "io_failed": 1, 00:11:35.342 "io_timeout": 0, 00:11:35.342 "avg_latency_us": 98.86309155884831, 00:11:35.342 "min_latency_us": 24.817467248908297, 00:11:35.342 "max_latency_us": 1330.7528384279476 00:11:35.342 } 00:11:35.342 ], 00:11:35.342 "core_count": 1 00:11:35.342 } 00:11:35.342 12:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.342 12:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70861 00:11:35.342 12:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 70861 ']' 00:11:35.342 12:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 70861 00:11:35.342 12:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:35.342 12:28:47 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:35.342 12:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70861 00:11:35.342 killing process with pid 70861 00:11:35.342 12:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:35.342 12:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:35.342 12:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70861' 00:11:35.342 12:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 70861 00:11:35.342 [2024-09-30 12:28:47.075147] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:35.342 12:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 70861 00:11:35.602 [2024-09-30 12:28:47.414435] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:36.984 12:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qUcgBfiWYa 00:11:36.984 12:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:36.984 12:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:36.984 12:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:36.984 12:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:36.984 12:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:36.984 12:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:36.984 12:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:36.984 00:11:36.984 real 0m4.869s 00:11:36.984 user 0m5.532s 00:11:36.984 sys 0m0.682s 00:11:36.984 12:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:36.984 12:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.984 ************************************ 00:11:36.984 END TEST raid_read_error_test 00:11:36.984 ************************************ 00:11:36.984 12:28:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:36.984 12:28:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:36.984 12:28:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:36.984 12:28:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:37.244 ************************************ 00:11:37.245 START TEST raid_write_error_test 00:11:37.245 ************************************ 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Y5jtGuPmBV 00:11:37.245 12:28:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71007 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71007 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 71007 ']' 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:37.245 12:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.245 [2024-09-30 12:28:48.988618] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:37.245 [2024-09-30 12:28:48.988803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71007 ] 00:11:37.505 [2024-09-30 12:28:49.151972] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.505 [2024-09-30 12:28:49.391372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.765 [2024-09-30 12:28:49.606441] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.765 [2024-09-30 12:28:49.606555] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.025 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:38.025 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:38.025 12:28:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:38.025 12:28:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:38.025 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.025 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.025 BaseBdev1_malloc 00:11:38.025 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.025 12:28:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:38.025 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.025 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.025 true 00:11:38.025 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:38.025 12:28:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:38.025 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.025 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.025 [2024-09-30 12:28:49.869962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:38.025 [2024-09-30 12:28:49.870026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.025 [2024-09-30 12:28:49.870045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:38.025 [2024-09-30 12:28:49.870056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.025 [2024-09-30 12:28:49.872408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.025 [2024-09-30 12:28:49.872447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:38.025 BaseBdev1 00:11:38.025 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.025 12:28:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:38.025 12:28:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:38.025 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.025 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.286 BaseBdev2_malloc 00:11:38.286 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.286 12:28:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:38.286 12:28:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.286 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.286 true 00:11:38.286 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.286 12:28:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:38.286 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.286 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.286 [2024-09-30 12:28:49.969278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:38.286 [2024-09-30 12:28:49.969333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.286 [2024-09-30 12:28:49.969347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:38.286 [2024-09-30 12:28:49.969358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.286 [2024-09-30 12:28:49.971642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.286 [2024-09-30 12:28:49.971766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:38.286 BaseBdev2 00:11:38.286 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.286 12:28:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:38.286 12:28:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:38.286 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.286 12:28:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:38.286 BaseBdev3_malloc 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.286 true 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.286 [2024-09-30 12:28:50.040698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:38.286 [2024-09-30 12:28:50.040759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.286 [2024-09-30 12:28:50.040791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:38.286 [2024-09-30 12:28:50.040803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.286 [2024-09-30 12:28:50.043082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.286 [2024-09-30 12:28:50.043119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:38.286 BaseBdev3 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.286 BaseBdev4_malloc 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.286 true 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.286 [2024-09-30 12:28:50.113833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:38.286 [2024-09-30 12:28:50.113881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.286 [2024-09-30 12:28:50.113897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:38.286 [2024-09-30 12:28:50.113909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.286 [2024-09-30 12:28:50.116174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.286 [2024-09-30 12:28:50.116285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:38.286 BaseBdev4 
00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.286 [2024-09-30 12:28:50.125892] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:38.286 [2024-09-30 12:28:50.127929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.286 [2024-09-30 12:28:50.128003] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.286 [2024-09-30 12:28:50.128060] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:38.286 [2024-09-30 12:28:50.128275] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:38.286 [2024-09-30 12:28:50.128290] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:38.286 [2024-09-30 12:28:50.128520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:38.286 [2024-09-30 12:28:50.128677] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:38.286 [2024-09-30 12:28:50.128685] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:38.286 [2024-09-30 12:28:50.128860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.286 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.287 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.287 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.287 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.287 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.546 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.546 "name": "raid_bdev1", 00:11:38.546 "uuid": "30c5bf1a-38bc-4313-8895-5213dabd3147", 00:11:38.546 "strip_size_kb": 64, 00:11:38.546 "state": "online", 00:11:38.546 "raid_level": "raid0", 00:11:38.546 "superblock": true, 00:11:38.546 "num_base_bdevs": 4, 00:11:38.546 "num_base_bdevs_discovered": 4, 00:11:38.546 
"num_base_bdevs_operational": 4, 00:11:38.546 "base_bdevs_list": [ 00:11:38.546 { 00:11:38.546 "name": "BaseBdev1", 00:11:38.546 "uuid": "ff9a5d8a-bc3f-5246-bc22-6cb70fad54c5", 00:11:38.546 "is_configured": true, 00:11:38.546 "data_offset": 2048, 00:11:38.546 "data_size": 63488 00:11:38.546 }, 00:11:38.546 { 00:11:38.546 "name": "BaseBdev2", 00:11:38.546 "uuid": "0dbb5481-e745-5287-ba95-9ef1434db673", 00:11:38.546 "is_configured": true, 00:11:38.546 "data_offset": 2048, 00:11:38.546 "data_size": 63488 00:11:38.546 }, 00:11:38.546 { 00:11:38.546 "name": "BaseBdev3", 00:11:38.546 "uuid": "cc676cfe-7a7e-5bfd-8510-56a176867036", 00:11:38.546 "is_configured": true, 00:11:38.546 "data_offset": 2048, 00:11:38.546 "data_size": 63488 00:11:38.546 }, 00:11:38.546 { 00:11:38.546 "name": "BaseBdev4", 00:11:38.546 "uuid": "c4b8b3b3-8c94-5e36-bbb0-fb2b9c7f7f12", 00:11:38.546 "is_configured": true, 00:11:38.546 "data_offset": 2048, 00:11:38.546 "data_size": 63488 00:11:38.546 } 00:11:38.546 ] 00:11:38.546 }' 00:11:38.546 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.546 12:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.806 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:38.806 12:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:38.806 [2024-09-30 12:28:50.634225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:39.745 12:28:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:39.745 12:28:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.745 12:28:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.746 "name": "raid_bdev1", 00:11:39.746 "uuid": "30c5bf1a-38bc-4313-8895-5213dabd3147", 00:11:39.746 "strip_size_kb": 64, 00:11:39.746 "state": "online", 00:11:39.746 "raid_level": "raid0", 00:11:39.746 "superblock": true, 00:11:39.746 "num_base_bdevs": 4, 00:11:39.746 "num_base_bdevs_discovered": 4, 00:11:39.746 "num_base_bdevs_operational": 4, 00:11:39.746 "base_bdevs_list": [ 00:11:39.746 { 00:11:39.746 "name": "BaseBdev1", 00:11:39.746 "uuid": "ff9a5d8a-bc3f-5246-bc22-6cb70fad54c5", 00:11:39.746 "is_configured": true, 00:11:39.746 "data_offset": 2048, 00:11:39.746 "data_size": 63488 00:11:39.746 }, 00:11:39.746 { 00:11:39.746 "name": "BaseBdev2", 00:11:39.746 "uuid": "0dbb5481-e745-5287-ba95-9ef1434db673", 00:11:39.746 "is_configured": true, 00:11:39.746 "data_offset": 2048, 00:11:39.746 "data_size": 63488 00:11:39.746 }, 00:11:39.746 { 00:11:39.746 "name": "BaseBdev3", 00:11:39.746 "uuid": "cc676cfe-7a7e-5bfd-8510-56a176867036", 00:11:39.746 "is_configured": true, 00:11:39.746 "data_offset": 2048, 00:11:39.746 "data_size": 63488 00:11:39.746 }, 00:11:39.746 { 00:11:39.746 "name": "BaseBdev4", 00:11:39.746 "uuid": "c4b8b3b3-8c94-5e36-bbb0-fb2b9c7f7f12", 00:11:39.746 "is_configured": true, 00:11:39.746 "data_offset": 2048, 00:11:39.746 "data_size": 63488 00:11:39.746 } 00:11:39.746 ] 00:11:39.746 }' 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.746 12:28:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.315 12:28:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:40.315 12:28:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.315 12:28:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:40.315 [2024-09-30 12:28:52.002765] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:40.315 [2024-09-30 12:28:52.002916] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:40.315 [2024-09-30 12:28:52.005480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.316 [2024-09-30 12:28:52.005542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.316 [2024-09-30 12:28:52.005591] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.316 [2024-09-30 12:28:52.005602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:40.316 { 00:11:40.316 "results": [ 00:11:40.316 { 00:11:40.316 "job": "raid_bdev1", 00:11:40.316 "core_mask": "0x1", 00:11:40.316 "workload": "randrw", 00:11:40.316 "percentage": 50, 00:11:40.316 "status": "finished", 00:11:40.316 "queue_depth": 1, 00:11:40.316 "io_size": 131072, 00:11:40.316 "runtime": 1.369284, 00:11:40.316 "iops": 14455.730148018965, 00:11:40.316 "mibps": 1806.9662685023707, 00:11:40.316 "io_failed": 1, 00:11:40.316 "io_timeout": 0, 00:11:40.316 "avg_latency_us": 97.58708614830395, 00:11:40.316 "min_latency_us": 24.593886462882097, 00:11:40.316 "max_latency_us": 1337.907423580786 00:11:40.316 } 00:11:40.316 ], 00:11:40.316 "core_count": 1 00:11:40.316 } 00:11:40.316 12:28:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.316 12:28:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71007 00:11:40.316 12:28:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 71007 ']' 00:11:40.316 12:28:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 71007 00:11:40.316 12:28:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:11:40.316 12:28:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:40.316 12:28:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71007 00:11:40.316 12:28:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:40.316 12:28:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:40.316 12:28:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71007' 00:11:40.316 killing process with pid 71007 00:11:40.316 12:28:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 71007 00:11:40.316 [2024-09-30 12:28:52.053237] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:40.316 12:28:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 71007 00:11:40.575 [2024-09-30 12:28:52.391213] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:41.957 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Y5jtGuPmBV 00:11:41.957 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:41.957 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:41.957 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:41.957 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:41.957 ************************************ 00:11:41.957 END TEST raid_write_error_test 00:11:41.957 ************************************ 00:11:41.957 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:41.957 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:41.957 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.73 != \0\.\0\0 ]] 00:11:41.957 00:11:41.957 real 0m4.908s 00:11:41.957 user 0m5.539s 00:11:41.957 sys 0m0.735s 00:11:41.957 12:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.957 12:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.217 12:28:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:42.217 12:28:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:42.217 12:28:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:42.217 12:28:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.217 12:28:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:42.217 ************************************ 00:11:42.217 START TEST raid_state_function_test 00:11:42.217 ************************************ 00:11:42.217 12:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:11:42.217 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:42.217 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:42.217 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:42.217 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:42.217 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:42.217 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.217 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:42.217 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:42.217 12:28:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.217 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:42.217 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:42.217 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.217 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:42.217 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:42.217 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.217 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:42.217 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:42.217 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:42.218 Process raid pid: 71156 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71156 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71156' 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71156 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 71156 ']' 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:42.218 12:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.218 [2024-09-30 12:28:53.972049] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:42.218 [2024-09-30 12:28:53.972237] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.478 [2024-09-30 12:28:54.141351] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.737 [2024-09-30 12:28:54.389037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.737 [2024-09-30 12:28:54.621115] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.737 [2024-09-30 12:28:54.621232] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.997 [2024-09-30 12:28:54.800030] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:42.997 [2024-09-30 12:28:54.800150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:42.997 [2024-09-30 12:28:54.800184] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:42.997 [2024-09-30 12:28:54.800209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:42.997 [2024-09-30 12:28:54.800269] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:42.997 [2024-09-30 12:28:54.800294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:42.997 [2024-09-30 12:28:54.800353] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:42.997 [2024-09-30 12:28:54.800376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.997 "name": "Existed_Raid", 00:11:42.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.997 "strip_size_kb": 64, 00:11:42.997 "state": "configuring", 00:11:42.997 "raid_level": "concat", 00:11:42.997 "superblock": false, 00:11:42.997 "num_base_bdevs": 4, 00:11:42.997 "num_base_bdevs_discovered": 0, 00:11:42.997 "num_base_bdevs_operational": 4, 00:11:42.997 "base_bdevs_list": [ 00:11:42.997 { 00:11:42.997 "name": "BaseBdev1", 00:11:42.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.997 "is_configured": false, 00:11:42.997 "data_offset": 0, 00:11:42.997 "data_size": 0 00:11:42.997 }, 00:11:42.997 { 00:11:42.997 "name": "BaseBdev2", 00:11:42.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.997 "is_configured": false, 00:11:42.997 "data_offset": 0, 00:11:42.997 "data_size": 0 00:11:42.997 }, 00:11:42.997 { 00:11:42.997 "name": "BaseBdev3", 00:11:42.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.997 "is_configured": false, 00:11:42.997 "data_offset": 0, 00:11:42.997 "data_size": 0 00:11:42.997 }, 00:11:42.997 { 00:11:42.997 "name": "BaseBdev4", 00:11:42.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.997 "is_configured": false, 00:11:42.997 "data_offset": 0, 00:11:42.997 "data_size": 0 00:11:42.997 } 00:11:42.997 ] 00:11:42.997 }' 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.997 12:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.567 [2024-09-30 12:28:55.227199] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:43.567 [2024-09-30 12:28:55.227313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.567 [2024-09-30 12:28:55.239200] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:43.567 [2024-09-30 12:28:55.239279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:43.567 [2024-09-30 12:28:55.239304] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:43.567 [2024-09-30 12:28:55.239332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:43.567 [2024-09-30 12:28:55.239366] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:43.567 [2024-09-30 12:28:55.239403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:43.567 [2024-09-30 12:28:55.239421] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:43.567 [2024-09-30 12:28:55.239441] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.567 [2024-09-30 12:28:55.329081] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:43.567 BaseBdev1 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.567 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.567 [ 00:11:43.567 { 00:11:43.567 "name": "BaseBdev1", 00:11:43.567 "aliases": [ 00:11:43.567 "0220f240-3efb-4390-b3b8-ca576da4efd9" 00:11:43.567 ], 00:11:43.567 "product_name": "Malloc disk", 00:11:43.567 "block_size": 512, 00:11:43.567 "num_blocks": 65536, 00:11:43.567 "uuid": "0220f240-3efb-4390-b3b8-ca576da4efd9", 00:11:43.567 "assigned_rate_limits": { 00:11:43.567 "rw_ios_per_sec": 0, 00:11:43.567 "rw_mbytes_per_sec": 0, 00:11:43.567 "r_mbytes_per_sec": 0, 00:11:43.567 "w_mbytes_per_sec": 0 00:11:43.567 }, 00:11:43.567 "claimed": true, 00:11:43.567 "claim_type": "exclusive_write", 00:11:43.567 "zoned": false, 00:11:43.567 "supported_io_types": { 00:11:43.567 "read": true, 00:11:43.567 "write": true, 00:11:43.567 "unmap": true, 00:11:43.567 "flush": true, 00:11:43.567 "reset": true, 00:11:43.567 "nvme_admin": false, 00:11:43.567 "nvme_io": false, 00:11:43.567 "nvme_io_md": false, 00:11:43.567 "write_zeroes": true, 00:11:43.567 "zcopy": true, 00:11:43.567 "get_zone_info": false, 00:11:43.567 "zone_management": false, 00:11:43.567 "zone_append": false, 00:11:43.567 "compare": false, 00:11:43.567 "compare_and_write": false, 00:11:43.567 "abort": true, 00:11:43.567 "seek_hole": false, 00:11:43.567 "seek_data": false, 00:11:43.567 "copy": true, 00:11:43.567 "nvme_iov_md": false 00:11:43.567 }, 00:11:43.567 "memory_domains": [ 00:11:43.567 { 00:11:43.568 "dma_device_id": "system", 00:11:43.568 "dma_device_type": 1 00:11:43.568 }, 00:11:43.568 { 00:11:43.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.568 "dma_device_type": 2 00:11:43.568 } 00:11:43.568 ], 00:11:43.568 "driver_specific": {} 00:11:43.568 } 00:11:43.568 ] 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.568 "name": "Existed_Raid", 
00:11:43.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.568 "strip_size_kb": 64, 00:11:43.568 "state": "configuring", 00:11:43.568 "raid_level": "concat", 00:11:43.568 "superblock": false, 00:11:43.568 "num_base_bdevs": 4, 00:11:43.568 "num_base_bdevs_discovered": 1, 00:11:43.568 "num_base_bdevs_operational": 4, 00:11:43.568 "base_bdevs_list": [ 00:11:43.568 { 00:11:43.568 "name": "BaseBdev1", 00:11:43.568 "uuid": "0220f240-3efb-4390-b3b8-ca576da4efd9", 00:11:43.568 "is_configured": true, 00:11:43.568 "data_offset": 0, 00:11:43.568 "data_size": 65536 00:11:43.568 }, 00:11:43.568 { 00:11:43.568 "name": "BaseBdev2", 00:11:43.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.568 "is_configured": false, 00:11:43.568 "data_offset": 0, 00:11:43.568 "data_size": 0 00:11:43.568 }, 00:11:43.568 { 00:11:43.568 "name": "BaseBdev3", 00:11:43.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.568 "is_configured": false, 00:11:43.568 "data_offset": 0, 00:11:43.568 "data_size": 0 00:11:43.568 }, 00:11:43.568 { 00:11:43.568 "name": "BaseBdev4", 00:11:43.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.568 "is_configured": false, 00:11:43.568 "data_offset": 0, 00:11:43.568 "data_size": 0 00:11:43.568 } 00:11:43.568 ] 00:11:43.568 }' 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.568 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.137 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:44.137 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.137 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.137 [2024-09-30 12:28:55.768349] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:44.137 [2024-09-30 12:28:55.768399] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:44.137 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.137 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.138 [2024-09-30 12:28:55.780383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:44.138 [2024-09-30 12:28:55.782474] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:44.138 [2024-09-30 12:28:55.782561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:44.138 [2024-09-30 12:28:55.782575] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:44.138 [2024-09-30 12:28:55.782585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:44.138 [2024-09-30 12:28:55.782592] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:44.138 [2024-09-30 12:28:55.782601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.138 "name": "Existed_Raid", 00:11:44.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.138 "strip_size_kb": 64, 00:11:44.138 "state": "configuring", 00:11:44.138 "raid_level": "concat", 00:11:44.138 "superblock": false, 00:11:44.138 "num_base_bdevs": 4, 00:11:44.138 
"num_base_bdevs_discovered": 1, 00:11:44.138 "num_base_bdevs_operational": 4, 00:11:44.138 "base_bdevs_list": [ 00:11:44.138 { 00:11:44.138 "name": "BaseBdev1", 00:11:44.138 "uuid": "0220f240-3efb-4390-b3b8-ca576da4efd9", 00:11:44.138 "is_configured": true, 00:11:44.138 "data_offset": 0, 00:11:44.138 "data_size": 65536 00:11:44.138 }, 00:11:44.138 { 00:11:44.138 "name": "BaseBdev2", 00:11:44.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.138 "is_configured": false, 00:11:44.138 "data_offset": 0, 00:11:44.138 "data_size": 0 00:11:44.138 }, 00:11:44.138 { 00:11:44.138 "name": "BaseBdev3", 00:11:44.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.138 "is_configured": false, 00:11:44.138 "data_offset": 0, 00:11:44.138 "data_size": 0 00:11:44.138 }, 00:11:44.138 { 00:11:44.138 "name": "BaseBdev4", 00:11:44.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.138 "is_configured": false, 00:11:44.138 "data_offset": 0, 00:11:44.138 "data_size": 0 00:11:44.138 } 00:11:44.138 ] 00:11:44.138 }' 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.138 12:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.398 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:44.398 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.398 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.398 [2024-09-30 12:28:56.289804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.398 BaseBdev2 00:11:44.398 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.398 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:44.398 12:28:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:44.398 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:44.398 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:44.398 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:44.398 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.658 [ 00:11:44.658 { 00:11:44.658 "name": "BaseBdev2", 00:11:44.658 "aliases": [ 00:11:44.658 "d9eece73-74e0-41f4-87f5-2dd4f7493e8d" 00:11:44.658 ], 00:11:44.658 "product_name": "Malloc disk", 00:11:44.658 "block_size": 512, 00:11:44.658 "num_blocks": 65536, 00:11:44.658 "uuid": "d9eece73-74e0-41f4-87f5-2dd4f7493e8d", 00:11:44.658 "assigned_rate_limits": { 00:11:44.658 "rw_ios_per_sec": 0, 00:11:44.658 "rw_mbytes_per_sec": 0, 00:11:44.658 "r_mbytes_per_sec": 0, 00:11:44.658 "w_mbytes_per_sec": 0 00:11:44.658 }, 00:11:44.658 "claimed": true, 00:11:44.658 "claim_type": "exclusive_write", 00:11:44.658 "zoned": false, 00:11:44.658 "supported_io_types": { 
00:11:44.658 "read": true, 00:11:44.658 "write": true, 00:11:44.658 "unmap": true, 00:11:44.658 "flush": true, 00:11:44.658 "reset": true, 00:11:44.658 "nvme_admin": false, 00:11:44.658 "nvme_io": false, 00:11:44.658 "nvme_io_md": false, 00:11:44.658 "write_zeroes": true, 00:11:44.658 "zcopy": true, 00:11:44.658 "get_zone_info": false, 00:11:44.658 "zone_management": false, 00:11:44.658 "zone_append": false, 00:11:44.658 "compare": false, 00:11:44.658 "compare_and_write": false, 00:11:44.658 "abort": true, 00:11:44.658 "seek_hole": false, 00:11:44.658 "seek_data": false, 00:11:44.658 "copy": true, 00:11:44.658 "nvme_iov_md": false 00:11:44.658 }, 00:11:44.658 "memory_domains": [ 00:11:44.658 { 00:11:44.658 "dma_device_id": "system", 00:11:44.658 "dma_device_type": 1 00:11:44.658 }, 00:11:44.658 { 00:11:44.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.658 "dma_device_type": 2 00:11:44.658 } 00:11:44.658 ], 00:11:44.658 "driver_specific": {} 00:11:44.658 } 00:11:44.658 ] 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.658 "name": "Existed_Raid", 00:11:44.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.658 "strip_size_kb": 64, 00:11:44.658 "state": "configuring", 00:11:44.658 "raid_level": "concat", 00:11:44.658 "superblock": false, 00:11:44.658 "num_base_bdevs": 4, 00:11:44.658 "num_base_bdevs_discovered": 2, 00:11:44.658 "num_base_bdevs_operational": 4, 00:11:44.658 "base_bdevs_list": [ 00:11:44.658 { 00:11:44.658 "name": "BaseBdev1", 00:11:44.658 "uuid": "0220f240-3efb-4390-b3b8-ca576da4efd9", 00:11:44.658 "is_configured": true, 00:11:44.658 "data_offset": 0, 00:11:44.658 "data_size": 65536 00:11:44.658 }, 00:11:44.658 { 00:11:44.658 "name": "BaseBdev2", 00:11:44.658 "uuid": "d9eece73-74e0-41f4-87f5-2dd4f7493e8d", 00:11:44.658 
"is_configured": true, 00:11:44.658 "data_offset": 0, 00:11:44.658 "data_size": 65536 00:11:44.658 }, 00:11:44.658 { 00:11:44.658 "name": "BaseBdev3", 00:11:44.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.658 "is_configured": false, 00:11:44.658 "data_offset": 0, 00:11:44.658 "data_size": 0 00:11:44.658 }, 00:11:44.658 { 00:11:44.658 "name": "BaseBdev4", 00:11:44.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.658 "is_configured": false, 00:11:44.658 "data_offset": 0, 00:11:44.658 "data_size": 0 00:11:44.658 } 00:11:44.658 ] 00:11:44.658 }' 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.658 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.918 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:44.918 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.918 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.918 [2024-09-30 12:28:56.795723] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:44.918 BaseBdev3 00:11:44.918 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.918 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:44.918 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:44.918 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:44.918 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:44.918 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:44.918 12:28:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:44.918 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:44.918 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.918 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.918 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.918 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:44.918 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.918 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.178 [ 00:11:45.178 { 00:11:45.178 "name": "BaseBdev3", 00:11:45.178 "aliases": [ 00:11:45.178 "07d24c57-278c-44be-a1ee-d04554114f3d" 00:11:45.178 ], 00:11:45.178 "product_name": "Malloc disk", 00:11:45.178 "block_size": 512, 00:11:45.178 "num_blocks": 65536, 00:11:45.178 "uuid": "07d24c57-278c-44be-a1ee-d04554114f3d", 00:11:45.178 "assigned_rate_limits": { 00:11:45.178 "rw_ios_per_sec": 0, 00:11:45.178 "rw_mbytes_per_sec": 0, 00:11:45.178 "r_mbytes_per_sec": 0, 00:11:45.178 "w_mbytes_per_sec": 0 00:11:45.178 }, 00:11:45.178 "claimed": true, 00:11:45.178 "claim_type": "exclusive_write", 00:11:45.178 "zoned": false, 00:11:45.178 "supported_io_types": { 00:11:45.178 "read": true, 00:11:45.178 "write": true, 00:11:45.178 "unmap": true, 00:11:45.178 "flush": true, 00:11:45.178 "reset": true, 00:11:45.178 "nvme_admin": false, 00:11:45.178 "nvme_io": false, 00:11:45.178 "nvme_io_md": false, 00:11:45.178 "write_zeroes": true, 00:11:45.178 "zcopy": true, 00:11:45.178 "get_zone_info": false, 00:11:45.178 "zone_management": false, 00:11:45.178 "zone_append": false, 00:11:45.178 "compare": false, 00:11:45.178 "compare_and_write": false, 
00:11:45.178 "abort": true, 00:11:45.178 "seek_hole": false, 00:11:45.178 "seek_data": false, 00:11:45.178 "copy": true, 00:11:45.178 "nvme_iov_md": false 00:11:45.178 }, 00:11:45.178 "memory_domains": [ 00:11:45.178 { 00:11:45.178 "dma_device_id": "system", 00:11:45.178 "dma_device_type": 1 00:11:45.178 }, 00:11:45.178 { 00:11:45.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.178 "dma_device_type": 2 00:11:45.178 } 00:11:45.178 ], 00:11:45.178 "driver_specific": {} 00:11:45.178 } 00:11:45.178 ] 00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:45.178 "name": "Existed_Raid",
00:11:45.178 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:45.178 "strip_size_kb": 64,
00:11:45.178 "state": "configuring",
00:11:45.178 "raid_level": "concat",
00:11:45.178 "superblock": false,
00:11:45.178 "num_base_bdevs": 4,
00:11:45.178 "num_base_bdevs_discovered": 3,
00:11:45.178 "num_base_bdevs_operational": 4,
00:11:45.178 "base_bdevs_list": [
00:11:45.178 {
00:11:45.178 "name": "BaseBdev1",
00:11:45.178 "uuid": "0220f240-3efb-4390-b3b8-ca576da4efd9",
00:11:45.178 "is_configured": true,
00:11:45.178 "data_offset": 0,
00:11:45.178 "data_size": 65536
00:11:45.178 },
00:11:45.178 {
00:11:45.178 "name": "BaseBdev2",
00:11:45.178 "uuid": "d9eece73-74e0-41f4-87f5-2dd4f7493e8d",
00:11:45.178 "is_configured": true,
00:11:45.178 "data_offset": 0,
00:11:45.178 "data_size": 65536
00:11:45.178 },
00:11:45.178 {
00:11:45.178 "name": "BaseBdev3",
00:11:45.178 "uuid": "07d24c57-278c-44be-a1ee-d04554114f3d",
00:11:45.178 "is_configured": true,
00:11:45.178 "data_offset": 0,
00:11:45.178 "data_size": 65536
00:11:45.178 },
00:11:45.178 {
00:11:45.178 "name": "BaseBdev4",
00:11:45.178 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:45.178 "is_configured": false,
00:11:45.178 "data_offset": 0,
00:11:45.178 "data_size": 0
00:11:45.178 }
00:11:45.178 ]
00:11:45.178 }'
00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:45.178 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.438 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:11:45.438 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.438 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.438 [2024-09-30 12:28:57.310859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:45.438 [2024-09-30 12:28:57.310995] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:11:45.438 [2024-09-30 12:28:57.311007] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:11:45.438 [2024-09-30 12:28:57.311315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:11:45.438 [2024-09-30 12:28:57.311531] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:11:45.438 [2024-09-30 12:28:57.311544] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:11:45.438 [2024-09-30 12:28:57.311844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:45.438 BaseBdev4
00:11:45.438 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.438 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:11:45.438 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:11:45.438 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:45.438 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:45.438 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:45.438 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:45.438 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:45.438 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.438 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.438 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.438 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:11:45.438 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.438 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.697 [
00:11:45.697 {
00:11:45.697 "name": "BaseBdev4",
00:11:45.697 "aliases": [
00:11:45.697 "dd5418a0-5611-4791-a9f7-b91e477de411"
00:11:45.697 ],
00:11:45.697 "product_name": "Malloc disk",
00:11:45.697 "block_size": 512,
00:11:45.697 "num_blocks": 65536,
00:11:45.697 "uuid": "dd5418a0-5611-4791-a9f7-b91e477de411",
00:11:45.697 "assigned_rate_limits": {
00:11:45.697 "rw_ios_per_sec": 0,
00:11:45.697 "rw_mbytes_per_sec": 0,
00:11:45.697 "r_mbytes_per_sec": 0,
00:11:45.697 "w_mbytes_per_sec": 0
00:11:45.697 },
00:11:45.697 "claimed": true,
00:11:45.697 "claim_type": "exclusive_write",
00:11:45.697 "zoned": false,
00:11:45.697 "supported_io_types": {
00:11:45.697 "read": true,
00:11:45.697 "write": true,
00:11:45.697 "unmap": true,
00:11:45.697 "flush": true,
00:11:45.697 "reset": true,
00:11:45.697 "nvme_admin": false,
00:11:45.697 "nvme_io": false,
00:11:45.697 "nvme_io_md": false,
00:11:45.697 "write_zeroes": true,
00:11:45.697 "zcopy": true,
00:11:45.697 "get_zone_info": false,
00:11:45.697 "zone_management": false,
00:11:45.697 "zone_append": false,
00:11:45.697 "compare": false,
00:11:45.697 "compare_and_write": false,
00:11:45.697 "abort": true,
00:11:45.697 "seek_hole": false,
00:11:45.697 "seek_data": false,
00:11:45.697 "copy": true,
00:11:45.697 "nvme_iov_md": false
00:11:45.697 },
00:11:45.697 "memory_domains": [
00:11:45.697 {
00:11:45.697 "dma_device_id": "system",
00:11:45.697 "dma_device_type": 1
00:11:45.697 },
00:11:45.697 {
00:11:45.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:45.697 "dma_device_type": 2
00:11:45.697 }
00:11:45.697 ],
00:11:45.697 "driver_specific": {}
00:11:45.697 }
00:11:45.697 ]
00:11:45.697 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.697 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:45.697 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:45.697 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:45.697 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
00:11:45.697 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:45.697 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:45.697 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:45.697 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:45.697 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:45.697 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:45.697 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:45.697 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:45.697 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:45.697 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:45.697 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:45.697 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.697 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.697 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.698 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:45.698 "name": "Existed_Raid",
00:11:45.698 "uuid": "b1875ce7-1006-4bba-ada7-a16af439abae",
00:11:45.698 "strip_size_kb": 64,
00:11:45.698 "state": "online",
00:11:45.698 "raid_level": "concat",
00:11:45.698 "superblock": false,
00:11:45.698 "num_base_bdevs": 4,
00:11:45.698 "num_base_bdevs_discovered": 4,
00:11:45.698 "num_base_bdevs_operational": 4,
00:11:45.698 "base_bdevs_list": [
00:11:45.698 {
00:11:45.698 "name": "BaseBdev1",
00:11:45.698 "uuid": "0220f240-3efb-4390-b3b8-ca576da4efd9",
00:11:45.698 "is_configured": true,
00:11:45.698 "data_offset": 0,
00:11:45.698 "data_size": 65536
00:11:45.698 },
00:11:45.698 {
00:11:45.698 "name": "BaseBdev2",
00:11:45.698 "uuid": "d9eece73-74e0-41f4-87f5-2dd4f7493e8d",
00:11:45.698 "is_configured": true,
00:11:45.698 "data_offset": 0,
00:11:45.698 "data_size": 65536
00:11:45.698 },
00:11:45.698 {
00:11:45.698 "name": "BaseBdev3",
00:11:45.698 "uuid": "07d24c57-278c-44be-a1ee-d04554114f3d",
00:11:45.698 "is_configured": true,
00:11:45.698 "data_offset": 0,
00:11:45.698 "data_size": 65536
00:11:45.698 },
00:11:45.698 {
00:11:45.698 "name": "BaseBdev4",
00:11:45.698 "uuid": "dd5418a0-5611-4791-a9f7-b91e477de411",
00:11:45.698 "is_configured": true,
00:11:45.698 "data_offset": 0,
00:11:45.698 "data_size": 65536
00:11:45.698 }
00:11:45.698 ]
00:11:45.698 }'
00:11:45.698 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:45.698 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.957 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:11:45.957 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:11:45.957 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:45.957 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:45.957 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:45.957 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:45.957 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:11:45.957 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.957 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.957 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:45.957 [2024-09-30 12:28:57.778377] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:45.957 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.957 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:45.957 "name": "Existed_Raid",
00:11:45.957 "aliases": [
00:11:45.957 "b1875ce7-1006-4bba-ada7-a16af439abae"
00:11:45.957 ],
00:11:45.957 "product_name": "Raid Volume",
00:11:45.957 "block_size": 512,
00:11:45.957 "num_blocks": 262144,
00:11:45.957 "uuid": "b1875ce7-1006-4bba-ada7-a16af439abae",
00:11:45.957 "assigned_rate_limits": {
00:11:45.957 "rw_ios_per_sec": 0,
00:11:45.957 "rw_mbytes_per_sec": 0,
00:11:45.957 "r_mbytes_per_sec": 0,
00:11:45.957 "w_mbytes_per_sec": 0
00:11:45.957 },
00:11:45.957 "claimed": false,
00:11:45.957 "zoned": false,
00:11:45.957 "supported_io_types": {
00:11:45.957 "read": true,
00:11:45.957 "write": true,
00:11:45.957 "unmap": true,
00:11:45.957 "flush": true,
00:11:45.957 "reset": true,
00:11:45.957 "nvme_admin": false,
00:11:45.957 "nvme_io": false,
00:11:45.957 "nvme_io_md": false,
00:11:45.957 "write_zeroes": true,
00:11:45.957 "zcopy": false,
00:11:45.957 "get_zone_info": false,
00:11:45.957 "zone_management": false,
00:11:45.957 "zone_append": false,
00:11:45.957 "compare": false,
00:11:45.957 "compare_and_write": false,
00:11:45.957 "abort": false,
00:11:45.957 "seek_hole": false,
00:11:45.957 "seek_data": false,
00:11:45.957 "copy": false,
00:11:45.957 "nvme_iov_md": false
00:11:45.957 },
00:11:45.957 "memory_domains": [
00:11:45.957 {
00:11:45.957 "dma_device_id": "system",
00:11:45.957 "dma_device_type": 1
00:11:45.957 },
00:11:45.957 {
00:11:45.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:45.958 "dma_device_type": 2
00:11:45.958 },
00:11:45.958 {
00:11:45.958 "dma_device_id": "system",
00:11:45.958 "dma_device_type": 1
00:11:45.958 },
00:11:45.958 {
00:11:45.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:45.958 "dma_device_type": 2
00:11:45.958 },
00:11:45.958 {
00:11:45.958 "dma_device_id": "system",
00:11:45.958 "dma_device_type": 1
00:11:45.958 },
00:11:45.958 {
00:11:45.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:45.958 "dma_device_type": 2
00:11:45.958 },
00:11:45.958 {
00:11:45.958 "dma_device_id": "system",
00:11:45.958 "dma_device_type": 1
00:11:45.958 },
00:11:45.958 {
00:11:45.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:45.958 "dma_device_type": 2
00:11:45.958 }
00:11:45.958 ],
00:11:45.958 "driver_specific": {
00:11:45.958 "raid": {
00:11:45.958 "uuid": "b1875ce7-1006-4bba-ada7-a16af439abae",
00:11:45.958 "strip_size_kb": 64,
00:11:45.958 "state": "online",
00:11:45.958 "raid_level": "concat",
00:11:45.958 "superblock": false,
00:11:45.958 "num_base_bdevs": 4,
00:11:45.958 "num_base_bdevs_discovered": 4,
00:11:45.958 "num_base_bdevs_operational": 4,
00:11:45.958 "base_bdevs_list": [
00:11:45.958 {
00:11:45.958 "name": "BaseBdev1",
00:11:45.958 "uuid": "0220f240-3efb-4390-b3b8-ca576da4efd9",
00:11:45.958 "is_configured": true,
00:11:45.958 "data_offset": 0,
00:11:45.958 "data_size": 65536
00:11:45.958 },
00:11:45.958 {
00:11:45.958 "name": "BaseBdev2",
00:11:45.958 "uuid": "d9eece73-74e0-41f4-87f5-2dd4f7493e8d",
00:11:45.958 "is_configured": true,
00:11:45.958 "data_offset": 0,
00:11:45.958 "data_size": 65536
00:11:45.958 },
00:11:45.958 {
00:11:45.958 "name": "BaseBdev3",
00:11:45.958 "uuid": "07d24c57-278c-44be-a1ee-d04554114f3d",
00:11:45.958 "is_configured": true,
00:11:45.958 "data_offset": 0,
00:11:45.958 "data_size": 65536
00:11:45.958 },
00:11:45.958 {
00:11:45.958 "name": "BaseBdev4",
00:11:45.958 "uuid": "dd5418a0-5611-4791-a9f7-b91e477de411",
00:11:45.958 "is_configured": true,
00:11:45.958 "data_offset": 0,
00:11:45.958 "data_size": 65536
00:11:45.958 }
00:11:45.958 ]
00:11:45.958 }
00:11:45.958 }
00:11:45.958 }'
00:11:45.958 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:11:46.217 BaseBdev2
00:11:46.217 BaseBdev3
00:11:46.217 BaseBdev4'
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.217 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.217 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:46.217 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:46.217 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:46.217 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:11:46.217 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:46.217 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.217 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.217 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.217 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:46.217 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:46.217 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:11:46.217 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.217 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.217 [2024-09-30 12:28:58.077587] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:11:46.217 [2024-09-30 12:28:58.077659] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:46.217 [2024-09-30 12:28:58.077732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:46.476 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.476 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:11:46.476 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:11:46.476 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:46.476 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:11:46.476 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:11:46.476 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3
00:11:46.476 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:46.476 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:11:46.476 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:46.476 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:46.476 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:46.476 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:46.476 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:46.476 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:46.476 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:46.476 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:46.476 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:46.477 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.477 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.477 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.477 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:46.477 "name": "Existed_Raid",
00:11:46.477 "uuid": "b1875ce7-1006-4bba-ada7-a16af439abae",
00:11:46.477 "strip_size_kb": 64,
00:11:46.477 "state": "offline",
00:11:46.477 "raid_level": "concat",
00:11:46.477 "superblock": false,
00:11:46.477 "num_base_bdevs": 4,
00:11:46.477 "num_base_bdevs_discovered": 3,
00:11:46.477 "num_base_bdevs_operational": 3,
00:11:46.477 "base_bdevs_list": [
00:11:46.477 {
00:11:46.477 "name": null,
00:11:46.477 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:46.477 "is_configured": false,
00:11:46.477 "data_offset": 0,
00:11:46.477 "data_size": 65536
00:11:46.477 },
00:11:46.477 {
00:11:46.477 "name": "BaseBdev2",
00:11:46.477 "uuid": "d9eece73-74e0-41f4-87f5-2dd4f7493e8d",
00:11:46.477 "is_configured": true,
00:11:46.477 "data_offset": 0,
00:11:46.477 "data_size": 65536
00:11:46.477 },
00:11:46.477 {
00:11:46.477 "name": "BaseBdev3",
00:11:46.477 "uuid": "07d24c57-278c-44be-a1ee-d04554114f3d",
00:11:46.477 "is_configured": true,
00:11:46.477 "data_offset": 0,
00:11:46.477 "data_size": 65536
00:11:46.477 },
00:11:46.477 {
00:11:46.477 "name": "BaseBdev4",
00:11:46.477 "uuid": "dd5418a0-5611-4791-a9f7-b91e477de411",
00:11:46.477 "is_configured": true,
00:11:46.477 "data_offset": 0,
00:11:46.477 "data_size": 65536
00:11:46.477 }
00:11:46.477 ]
00:11:46.477 }'
00:11:46.477 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:46.477 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.045 [2024-09-30 12:28:58.665342] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.045 [2024-09-30 12:28:58.826350] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.045 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.330 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.330 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:47.330 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:47.330 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:11:47.330 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.330 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.330 [2024-09-30 12:28:58.986857] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:11:47.330 [2024-09-30 12:28:58.986958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.330 BaseBdev2
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.330 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.330 [
00:11:47.330 {
00:11:47.330 "name": "BaseBdev2",
00:11:47.330 "aliases": [
00:11:47.330 "73c72bb5-b808-4b59-81fd-c5db4cf8c547"
00:11:47.330 ],
00:11:47.330 "product_name": "Malloc disk",
00:11:47.330 "block_size": 512,
00:11:47.330 "num_blocks": 65536,
00:11:47.330 "uuid": "73c72bb5-b808-4b59-81fd-c5db4cf8c547",
00:11:47.330 "assigned_rate_limits": {
00:11:47.330 "rw_ios_per_sec": 0,
00:11:47.330 "rw_mbytes_per_sec": 0,
00:11:47.330 "r_mbytes_per_sec": 0,
00:11:47.330 "w_mbytes_per_sec": 0
00:11:47.330 },
00:11:47.330 "claimed": false,
00:11:47.330 "zoned": false,
00:11:47.330 "supported_io_types": {
00:11:47.330 "read": true,
00:11:47.330 "write": true,
00:11:47.330 "unmap": true,
00:11:47.330 "flush": true,
00:11:47.330 "reset": true,
00:11:47.591 "nvme_admin": false,
00:11:47.592 "nvme_io": false,
00:11:47.592 "nvme_io_md": false,
00:11:47.592 "write_zeroes": true,
00:11:47.592 "zcopy": true,
00:11:47.592 "get_zone_info": false,
00:11:47.592 "zone_management": false,
00:11:47.592 "zone_append": false,
00:11:47.592 "compare": false,
00:11:47.592 "compare_and_write": false,
00:11:47.592 "abort": true,
00:11:47.592 "seek_hole": false,
00:11:47.592 "seek_data": false,
00:11:47.592 "copy": true,
00:11:47.592 "nvme_iov_md": false
00:11:47.592 },
00:11:47.592 "memory_domains": [
00:11:47.592 {
00:11:47.592 "dma_device_id": "system",
00:11:47.592 "dma_device_type": 1
00:11:47.592 },
00:11:47.592 {
00:11:47.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:47.592 "dma_device_type": 2
00:11:47.592 }
00:11:47.592 ],
00:11:47.592 "driver_specific": {}
00:11:47.592 }
00:11:47.592 ]
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.592 BaseBdev3
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.592 [
00:11:47.592 {
00:11:47.592 "name": "BaseBdev3",
00:11:47.592 "aliases": [
00:11:47.592 "e36d81ca-0df8-47e6-8fd2-a666c66b12de"
00:11:47.592 ],
00:11:47.592 "product_name": "Malloc disk",
00:11:47.592 "block_size": 512,
00:11:47.592 "num_blocks": 65536,
00:11:47.592 "uuid": "e36d81ca-0df8-47e6-8fd2-a666c66b12de",
00:11:47.592 "assigned_rate_limits": {
00:11:47.592 "rw_ios_per_sec": 0,
00:11:47.592 "rw_mbytes_per_sec": 0,
00:11:47.592 "r_mbytes_per_sec": 0,
00:11:47.592 "w_mbytes_per_sec": 0
00:11:47.592 },
00:11:47.592 "claimed": false,
00:11:47.592 "zoned": false,
00:11:47.592 "supported_io_types": {
00:11:47.592 "read": true,
00:11:47.592 "write": true,
00:11:47.592 "unmap": true,
00:11:47.592 "flush": true,
00:11:47.592 "reset": true,
00:11:47.592 "nvme_admin": false,
00:11:47.592 "nvme_io": false,
00:11:47.592 "nvme_io_md": false,
00:11:47.592 "write_zeroes": true,
00:11:47.592 "zcopy": true,
00:11:47.592 "get_zone_info": false,
00:11:47.592 "zone_management": false,
00:11:47.592 "zone_append": false,
00:11:47.592 "compare": false,
00:11:47.592 "compare_and_write": false,
00:11:47.592 "abort": true,
00:11:47.592 "seek_hole": false,
00:11:47.592 "seek_data": false,
00:11:47.592 "copy": true,
00:11:47.592 "nvme_iov_md": false
00:11:47.592 },
00:11:47.592 "memory_domains": [
00:11:47.592 {
00:11:47.592 "dma_device_id": "system",
00:11:47.592 "dma_device_type": 1
00:11:47.592 },
00:11:47.592 {
00:11:47.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:47.592 "dma_device_type": 2
00:11:47.592 }
00:11:47.592 ],
00:11:47.592 "driver_specific": {}
00:11:47.592 }
00:11:47.592 ]
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.592 BaseBdev4
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.592 [
00:11:47.592 {
00:11:47.592 "name": "BaseBdev4",
00:11:47.592 "aliases": [
00:11:47.592 "8ad5874d-dab3-4409-9d91-7e3f8137d05d"
00:11:47.592 ],
00:11:47.592 "product_name": "Malloc disk",
00:11:47.592 "block_size": 512,
00:11:47.592 "num_blocks": 65536,
00:11:47.592 "uuid": "8ad5874d-dab3-4409-9d91-7e3f8137d05d",
00:11:47.592 "assigned_rate_limits": {
00:11:47.592 "rw_ios_per_sec": 0,
00:11:47.592 "rw_mbytes_per_sec": 0,
00:11:47.592 "r_mbytes_per_sec": 0,
00:11:47.592 "w_mbytes_per_sec": 0
00:11:47.592 },
00:11:47.592 "claimed": false,
00:11:47.592 "zoned": false,
00:11:47.592 "supported_io_types": {
00:11:47.592 "read": true,
00:11:47.592 "write": true,
00:11:47.592 "unmap": true,
00:11:47.592 "flush": true,
00:11:47.592 "reset": true,
00:11:47.592 "nvme_admin": false,
00:11:47.592 "nvme_io": false,
00:11:47.592 "nvme_io_md": false,
00:11:47.592 "write_zeroes": true,
00:11:47.592 "zcopy": true,
00:11:47.592 "get_zone_info": false,
00:11:47.592 "zone_management": false,
00:11:47.592 "zone_append": false,
00:11:47.592 "compare": false,
00:11:47.592 "compare_and_write": false,
00:11:47.592 "abort": true,
00:11:47.592 "seek_hole": false,
00:11:47.592 "seek_data": false,
00:11:47.592 "copy": true,
00:11:47.592 "nvme_iov_md": false 00:11:47.592 }, 00:11:47.592 "memory_domains": [ 00:11:47.592 { 00:11:47.592 "dma_device_id": "system", 00:11:47.592 "dma_device_type": 1 00:11:47.592 }, 00:11:47.592 { 00:11:47.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.592 "dma_device_type": 2 00:11:47.592 } 00:11:47.592 ], 00:11:47.592 "driver_specific": {} 00:11:47.592 } 00:11:47.592 ] 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.592 [2024-09-30 12:28:59.404968] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:47.592 [2024-09-30 12:28:59.405086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:47.592 [2024-09-30 12:28:59.405129] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.592 [2024-09-30 12:28:59.407175] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.592 [2024-09-30 12:28:59.407271] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.592 12:28:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.592 "name": "Existed_Raid", 00:11:47.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.592 "strip_size_kb": 64, 00:11:47.592 "state": "configuring", 00:11:47.592 
"raid_level": "concat", 00:11:47.592 "superblock": false, 00:11:47.592 "num_base_bdevs": 4, 00:11:47.592 "num_base_bdevs_discovered": 3, 00:11:47.592 "num_base_bdevs_operational": 4, 00:11:47.592 "base_bdevs_list": [ 00:11:47.592 { 00:11:47.592 "name": "BaseBdev1", 00:11:47.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.592 "is_configured": false, 00:11:47.592 "data_offset": 0, 00:11:47.592 "data_size": 0 00:11:47.592 }, 00:11:47.592 { 00:11:47.592 "name": "BaseBdev2", 00:11:47.592 "uuid": "73c72bb5-b808-4b59-81fd-c5db4cf8c547", 00:11:47.592 "is_configured": true, 00:11:47.592 "data_offset": 0, 00:11:47.592 "data_size": 65536 00:11:47.592 }, 00:11:47.592 { 00:11:47.592 "name": "BaseBdev3", 00:11:47.592 "uuid": "e36d81ca-0df8-47e6-8fd2-a666c66b12de", 00:11:47.592 "is_configured": true, 00:11:47.592 "data_offset": 0, 00:11:47.592 "data_size": 65536 00:11:47.592 }, 00:11:47.592 { 00:11:47.592 "name": "BaseBdev4", 00:11:47.592 "uuid": "8ad5874d-dab3-4409-9d91-7e3f8137d05d", 00:11:47.592 "is_configured": true, 00:11:47.592 "data_offset": 0, 00:11:47.592 "data_size": 65536 00:11:47.592 } 00:11:47.592 ] 00:11:47.592 }' 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.592 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.161 [2024-09-30 12:28:59.844204] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.161 "name": "Existed_Raid", 00:11:48.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.161 "strip_size_kb": 64, 00:11:48.161 "state": "configuring", 00:11:48.161 "raid_level": "concat", 00:11:48.161 "superblock": false, 
00:11:48.161 "num_base_bdevs": 4, 00:11:48.161 "num_base_bdevs_discovered": 2, 00:11:48.161 "num_base_bdevs_operational": 4, 00:11:48.161 "base_bdevs_list": [ 00:11:48.161 { 00:11:48.161 "name": "BaseBdev1", 00:11:48.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.161 "is_configured": false, 00:11:48.161 "data_offset": 0, 00:11:48.161 "data_size": 0 00:11:48.161 }, 00:11:48.161 { 00:11:48.161 "name": null, 00:11:48.161 "uuid": "73c72bb5-b808-4b59-81fd-c5db4cf8c547", 00:11:48.161 "is_configured": false, 00:11:48.161 "data_offset": 0, 00:11:48.161 "data_size": 65536 00:11:48.161 }, 00:11:48.161 { 00:11:48.161 "name": "BaseBdev3", 00:11:48.161 "uuid": "e36d81ca-0df8-47e6-8fd2-a666c66b12de", 00:11:48.161 "is_configured": true, 00:11:48.161 "data_offset": 0, 00:11:48.161 "data_size": 65536 00:11:48.161 }, 00:11:48.161 { 00:11:48.161 "name": "BaseBdev4", 00:11:48.161 "uuid": "8ad5874d-dab3-4409-9d91-7e3f8137d05d", 00:11:48.161 "is_configured": true, 00:11:48.161 "data_offset": 0, 00:11:48.161 "data_size": 65536 00:11:48.161 } 00:11:48.161 ] 00:11:48.161 }' 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.161 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.420 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.420 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:48.420 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.420 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.420 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.420 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:48.420 12:29:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:48.420 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.420 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.680 [2024-09-30 12:29:00.345352] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:48.680 BaseBdev1 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:48.680 [ 00:11:48.680 { 00:11:48.680 "name": "BaseBdev1", 00:11:48.680 "aliases": [ 00:11:48.680 "12429680-cfe5-4040-844b-c02b5ed5e802" 00:11:48.680 ], 00:11:48.680 "product_name": "Malloc disk", 00:11:48.680 "block_size": 512, 00:11:48.680 "num_blocks": 65536, 00:11:48.680 "uuid": "12429680-cfe5-4040-844b-c02b5ed5e802", 00:11:48.680 "assigned_rate_limits": { 00:11:48.680 "rw_ios_per_sec": 0, 00:11:48.680 "rw_mbytes_per_sec": 0, 00:11:48.680 "r_mbytes_per_sec": 0, 00:11:48.680 "w_mbytes_per_sec": 0 00:11:48.680 }, 00:11:48.680 "claimed": true, 00:11:48.680 "claim_type": "exclusive_write", 00:11:48.680 "zoned": false, 00:11:48.680 "supported_io_types": { 00:11:48.680 "read": true, 00:11:48.680 "write": true, 00:11:48.680 "unmap": true, 00:11:48.680 "flush": true, 00:11:48.680 "reset": true, 00:11:48.680 "nvme_admin": false, 00:11:48.680 "nvme_io": false, 00:11:48.680 "nvme_io_md": false, 00:11:48.680 "write_zeroes": true, 00:11:48.680 "zcopy": true, 00:11:48.680 "get_zone_info": false, 00:11:48.680 "zone_management": false, 00:11:48.680 "zone_append": false, 00:11:48.680 "compare": false, 00:11:48.680 "compare_and_write": false, 00:11:48.680 "abort": true, 00:11:48.680 "seek_hole": false, 00:11:48.680 "seek_data": false, 00:11:48.680 "copy": true, 00:11:48.680 "nvme_iov_md": false 00:11:48.680 }, 00:11:48.680 "memory_domains": [ 00:11:48.680 { 00:11:48.680 "dma_device_id": "system", 00:11:48.680 "dma_device_type": 1 00:11:48.680 }, 00:11:48.680 { 00:11:48.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.680 "dma_device_type": 2 00:11:48.680 } 00:11:48.680 ], 00:11:48.680 "driver_specific": {} 00:11:48.680 } 00:11:48.680 ] 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.680 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.680 "name": "Existed_Raid", 00:11:48.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.680 "strip_size_kb": 64, 00:11:48.680 "state": "configuring", 00:11:48.680 "raid_level": "concat", 00:11:48.680 "superblock": false, 
00:11:48.680 "num_base_bdevs": 4, 00:11:48.680 "num_base_bdevs_discovered": 3, 00:11:48.680 "num_base_bdevs_operational": 4, 00:11:48.680 "base_bdevs_list": [ 00:11:48.680 { 00:11:48.680 "name": "BaseBdev1", 00:11:48.681 "uuid": "12429680-cfe5-4040-844b-c02b5ed5e802", 00:11:48.681 "is_configured": true, 00:11:48.681 "data_offset": 0, 00:11:48.681 "data_size": 65536 00:11:48.681 }, 00:11:48.681 { 00:11:48.681 "name": null, 00:11:48.681 "uuid": "73c72bb5-b808-4b59-81fd-c5db4cf8c547", 00:11:48.681 "is_configured": false, 00:11:48.681 "data_offset": 0, 00:11:48.681 "data_size": 65536 00:11:48.681 }, 00:11:48.681 { 00:11:48.681 "name": "BaseBdev3", 00:11:48.681 "uuid": "e36d81ca-0df8-47e6-8fd2-a666c66b12de", 00:11:48.681 "is_configured": true, 00:11:48.681 "data_offset": 0, 00:11:48.681 "data_size": 65536 00:11:48.681 }, 00:11:48.681 { 00:11:48.681 "name": "BaseBdev4", 00:11:48.681 "uuid": "8ad5874d-dab3-4409-9d91-7e3f8137d05d", 00:11:48.681 "is_configured": true, 00:11:48.681 "data_offset": 0, 00:11:48.681 "data_size": 65536 00:11:48.681 } 00:11:48.681 ] 00:11:48.681 }' 00:11:48.681 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.681 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:48.940 12:29:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.940 [2024-09-30 12:29:00.788653] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.940 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.198 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.198 "name": "Existed_Raid", 00:11:49.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.198 "strip_size_kb": 64, 00:11:49.198 "state": "configuring", 00:11:49.198 "raid_level": "concat", 00:11:49.198 "superblock": false, 00:11:49.198 "num_base_bdevs": 4, 00:11:49.198 "num_base_bdevs_discovered": 2, 00:11:49.198 "num_base_bdevs_operational": 4, 00:11:49.198 "base_bdevs_list": [ 00:11:49.198 { 00:11:49.198 "name": "BaseBdev1", 00:11:49.198 "uuid": "12429680-cfe5-4040-844b-c02b5ed5e802", 00:11:49.198 "is_configured": true, 00:11:49.198 "data_offset": 0, 00:11:49.198 "data_size": 65536 00:11:49.198 }, 00:11:49.198 { 00:11:49.198 "name": null, 00:11:49.198 "uuid": "73c72bb5-b808-4b59-81fd-c5db4cf8c547", 00:11:49.198 "is_configured": false, 00:11:49.198 "data_offset": 0, 00:11:49.198 "data_size": 65536 00:11:49.198 }, 00:11:49.198 { 00:11:49.198 "name": null, 00:11:49.198 "uuid": "e36d81ca-0df8-47e6-8fd2-a666c66b12de", 00:11:49.198 "is_configured": false, 00:11:49.198 "data_offset": 0, 00:11:49.198 "data_size": 65536 00:11:49.198 }, 00:11:49.198 { 00:11:49.198 "name": "BaseBdev4", 00:11:49.198 "uuid": "8ad5874d-dab3-4409-9d91-7e3f8137d05d", 00:11:49.198 "is_configured": true, 00:11:49.198 "data_offset": 0, 00:11:49.198 "data_size": 65536 00:11:49.198 } 00:11:49.198 ] 00:11:49.198 }' 00:11:49.198 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.198 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.456 [2024-09-30 12:29:01.275868] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.456 "name": "Existed_Raid", 00:11:49.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.456 "strip_size_kb": 64, 00:11:49.456 "state": "configuring", 00:11:49.456 "raid_level": "concat", 00:11:49.456 "superblock": false, 00:11:49.456 "num_base_bdevs": 4, 00:11:49.456 "num_base_bdevs_discovered": 3, 00:11:49.456 "num_base_bdevs_operational": 4, 00:11:49.456 "base_bdevs_list": [ 00:11:49.456 { 00:11:49.456 "name": "BaseBdev1", 00:11:49.456 "uuid": "12429680-cfe5-4040-844b-c02b5ed5e802", 00:11:49.456 "is_configured": true, 00:11:49.456 "data_offset": 0, 00:11:49.456 "data_size": 65536 00:11:49.456 }, 00:11:49.456 { 00:11:49.456 "name": null, 00:11:49.456 "uuid": "73c72bb5-b808-4b59-81fd-c5db4cf8c547", 00:11:49.456 "is_configured": false, 00:11:49.456 "data_offset": 0, 00:11:49.456 "data_size": 65536 00:11:49.456 }, 00:11:49.456 { 00:11:49.456 "name": "BaseBdev3", 00:11:49.456 "uuid": "e36d81ca-0df8-47e6-8fd2-a666c66b12de", 00:11:49.456 "is_configured": 
true, 00:11:49.456 "data_offset": 0, 00:11:49.456 "data_size": 65536 00:11:49.456 }, 00:11:49.456 { 00:11:49.456 "name": "BaseBdev4", 00:11:49.456 "uuid": "8ad5874d-dab3-4409-9d91-7e3f8137d05d", 00:11:49.456 "is_configured": true, 00:11:49.456 "data_offset": 0, 00:11:49.456 "data_size": 65536 00:11:49.456 } 00:11:49.456 ] 00:11:49.456 }' 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.456 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.024 [2024-09-30 12:29:01.719225] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.024 "name": "Existed_Raid", 00:11:50.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.024 "strip_size_kb": 64, 00:11:50.024 "state": "configuring", 00:11:50.024 "raid_level": "concat", 00:11:50.024 "superblock": false, 00:11:50.024 "num_base_bdevs": 4, 00:11:50.024 "num_base_bdevs_discovered": 2, 00:11:50.024 "num_base_bdevs_operational": 4, 00:11:50.024 
"base_bdevs_list": [ 00:11:50.024 { 00:11:50.024 "name": null, 00:11:50.024 "uuid": "12429680-cfe5-4040-844b-c02b5ed5e802", 00:11:50.024 "is_configured": false, 00:11:50.024 "data_offset": 0, 00:11:50.024 "data_size": 65536 00:11:50.024 }, 00:11:50.024 { 00:11:50.024 "name": null, 00:11:50.024 "uuid": "73c72bb5-b808-4b59-81fd-c5db4cf8c547", 00:11:50.024 "is_configured": false, 00:11:50.024 "data_offset": 0, 00:11:50.024 "data_size": 65536 00:11:50.024 }, 00:11:50.024 { 00:11:50.024 "name": "BaseBdev3", 00:11:50.024 "uuid": "e36d81ca-0df8-47e6-8fd2-a666c66b12de", 00:11:50.024 "is_configured": true, 00:11:50.024 "data_offset": 0, 00:11:50.024 "data_size": 65536 00:11:50.024 }, 00:11:50.024 { 00:11:50.024 "name": "BaseBdev4", 00:11:50.024 "uuid": "8ad5874d-dab3-4409-9d91-7e3f8137d05d", 00:11:50.024 "is_configured": true, 00:11:50.024 "data_offset": 0, 00:11:50.024 "data_size": 65536 00:11:50.024 } 00:11:50.024 ] 00:11:50.024 }' 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.024 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.593 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.593 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.593 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.593 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:50.593 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.593 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:50.593 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:50.593 12:29:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.594 [2024-09-30 12:29:02.242019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.594 "name": "Existed_Raid", 00:11:50.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.594 "strip_size_kb": 64, 00:11:50.594 "state": "configuring", 00:11:50.594 "raid_level": "concat", 00:11:50.594 "superblock": false, 00:11:50.594 "num_base_bdevs": 4, 00:11:50.594 "num_base_bdevs_discovered": 3, 00:11:50.594 "num_base_bdevs_operational": 4, 00:11:50.594 "base_bdevs_list": [ 00:11:50.594 { 00:11:50.594 "name": null, 00:11:50.594 "uuid": "12429680-cfe5-4040-844b-c02b5ed5e802", 00:11:50.594 "is_configured": false, 00:11:50.594 "data_offset": 0, 00:11:50.594 "data_size": 65536 00:11:50.594 }, 00:11:50.594 { 00:11:50.594 "name": "BaseBdev2", 00:11:50.594 "uuid": "73c72bb5-b808-4b59-81fd-c5db4cf8c547", 00:11:50.594 "is_configured": true, 00:11:50.594 "data_offset": 0, 00:11:50.594 "data_size": 65536 00:11:50.594 }, 00:11:50.594 { 00:11:50.594 "name": "BaseBdev3", 00:11:50.594 "uuid": "e36d81ca-0df8-47e6-8fd2-a666c66b12de", 00:11:50.594 "is_configured": true, 00:11:50.594 "data_offset": 0, 00:11:50.594 "data_size": 65536 00:11:50.594 }, 00:11:50.594 { 00:11:50.594 "name": "BaseBdev4", 00:11:50.594 "uuid": "8ad5874d-dab3-4409-9d91-7e3f8137d05d", 00:11:50.594 "is_configured": true, 00:11:50.594 "data_offset": 0, 00:11:50.594 "data_size": 65536 00:11:50.594 } 00:11:50.594 ] 00:11:50.594 }' 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.594 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.854 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.854 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:50.854 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.854 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:50.854 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.854 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:50.854 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.854 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:50.854 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.854 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.854 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.854 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 12429680-cfe5-4040-844b-c02b5ed5e802 00:11:50.854 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.854 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.114 [2024-09-30 12:29:02.782338] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:51.114 [2024-09-30 12:29:02.782449] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:51.114 [2024-09-30 12:29:02.782472] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:51.114 [2024-09-30 12:29:02.782800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:51.114 [2024-09-30 12:29:02.782999] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:51.114 [2024-09-30 12:29:02.783039] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:51.114 [2024-09-30 12:29:02.783315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.114 NewBaseBdev 00:11:51.114 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.114 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:51.114 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:51.114 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:51.114 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:51.114 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:51.114 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:51.114 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:51.114 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.114 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.114 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.114 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:51.114 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.114 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.114 [ 00:11:51.114 { 
00:11:51.114 "name": "NewBaseBdev", 00:11:51.114 "aliases": [ 00:11:51.114 "12429680-cfe5-4040-844b-c02b5ed5e802" 00:11:51.114 ], 00:11:51.114 "product_name": "Malloc disk", 00:11:51.114 "block_size": 512, 00:11:51.114 "num_blocks": 65536, 00:11:51.114 "uuid": "12429680-cfe5-4040-844b-c02b5ed5e802", 00:11:51.114 "assigned_rate_limits": { 00:11:51.114 "rw_ios_per_sec": 0, 00:11:51.114 "rw_mbytes_per_sec": 0, 00:11:51.114 "r_mbytes_per_sec": 0, 00:11:51.114 "w_mbytes_per_sec": 0 00:11:51.114 }, 00:11:51.114 "claimed": true, 00:11:51.114 "claim_type": "exclusive_write", 00:11:51.114 "zoned": false, 00:11:51.114 "supported_io_types": { 00:11:51.114 "read": true, 00:11:51.114 "write": true, 00:11:51.114 "unmap": true, 00:11:51.114 "flush": true, 00:11:51.114 "reset": true, 00:11:51.114 "nvme_admin": false, 00:11:51.114 "nvme_io": false, 00:11:51.114 "nvme_io_md": false, 00:11:51.114 "write_zeroes": true, 00:11:51.114 "zcopy": true, 00:11:51.114 "get_zone_info": false, 00:11:51.114 "zone_management": false, 00:11:51.114 "zone_append": false, 00:11:51.114 "compare": false, 00:11:51.114 "compare_and_write": false, 00:11:51.114 "abort": true, 00:11:51.114 "seek_hole": false, 00:11:51.114 "seek_data": false, 00:11:51.114 "copy": true, 00:11:51.114 "nvme_iov_md": false 00:11:51.114 }, 00:11:51.114 "memory_domains": [ 00:11:51.114 { 00:11:51.114 "dma_device_id": "system", 00:11:51.114 "dma_device_type": 1 00:11:51.114 }, 00:11:51.114 { 00:11:51.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.114 "dma_device_type": 2 00:11:51.114 } 00:11:51.114 ], 00:11:51.114 "driver_specific": {} 00:11:51.114 } 00:11:51.114 ] 00:11:51.114 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.114 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:51.114 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:51.114 
12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.114 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.114 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.115 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.115 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.115 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.115 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.115 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.115 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.115 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.115 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.115 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.115 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.115 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.115 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.115 "name": "Existed_Raid", 00:11:51.115 "uuid": "abcf3a45-88a8-4790-a1dd-a7df22ec7445", 00:11:51.115 "strip_size_kb": 64, 00:11:51.115 "state": "online", 00:11:51.115 "raid_level": "concat", 00:11:51.115 "superblock": false, 00:11:51.115 "num_base_bdevs": 4, 00:11:51.115 "num_base_bdevs_discovered": 4, 00:11:51.115 
"num_base_bdevs_operational": 4, 00:11:51.115 "base_bdevs_list": [ 00:11:51.115 { 00:11:51.115 "name": "NewBaseBdev", 00:11:51.115 "uuid": "12429680-cfe5-4040-844b-c02b5ed5e802", 00:11:51.115 "is_configured": true, 00:11:51.115 "data_offset": 0, 00:11:51.115 "data_size": 65536 00:11:51.115 }, 00:11:51.115 { 00:11:51.115 "name": "BaseBdev2", 00:11:51.115 "uuid": "73c72bb5-b808-4b59-81fd-c5db4cf8c547", 00:11:51.115 "is_configured": true, 00:11:51.115 "data_offset": 0, 00:11:51.115 "data_size": 65536 00:11:51.115 }, 00:11:51.115 { 00:11:51.115 "name": "BaseBdev3", 00:11:51.115 "uuid": "e36d81ca-0df8-47e6-8fd2-a666c66b12de", 00:11:51.115 "is_configured": true, 00:11:51.115 "data_offset": 0, 00:11:51.115 "data_size": 65536 00:11:51.115 }, 00:11:51.115 { 00:11:51.115 "name": "BaseBdev4", 00:11:51.115 "uuid": "8ad5874d-dab3-4409-9d91-7e3f8137d05d", 00:11:51.115 "is_configured": true, 00:11:51.115 "data_offset": 0, 00:11:51.115 "data_size": 65536 00:11:51.115 } 00:11:51.115 ] 00:11:51.115 }' 00:11:51.115 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.115 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.374 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:51.374 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:51.374 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:51.374 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:51.374 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:51.374 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:51.374 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:51.374 
12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:51.374 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.374 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.374 [2024-09-30 12:29:03.245881] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:51.374 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.634 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:51.635 "name": "Existed_Raid", 00:11:51.635 "aliases": [ 00:11:51.635 "abcf3a45-88a8-4790-a1dd-a7df22ec7445" 00:11:51.635 ], 00:11:51.635 "product_name": "Raid Volume", 00:11:51.635 "block_size": 512, 00:11:51.635 "num_blocks": 262144, 00:11:51.635 "uuid": "abcf3a45-88a8-4790-a1dd-a7df22ec7445", 00:11:51.635 "assigned_rate_limits": { 00:11:51.635 "rw_ios_per_sec": 0, 00:11:51.635 "rw_mbytes_per_sec": 0, 00:11:51.635 "r_mbytes_per_sec": 0, 00:11:51.635 "w_mbytes_per_sec": 0 00:11:51.635 }, 00:11:51.635 "claimed": false, 00:11:51.635 "zoned": false, 00:11:51.635 "supported_io_types": { 00:11:51.635 "read": true, 00:11:51.635 "write": true, 00:11:51.635 "unmap": true, 00:11:51.635 "flush": true, 00:11:51.635 "reset": true, 00:11:51.635 "nvme_admin": false, 00:11:51.635 "nvme_io": false, 00:11:51.635 "nvme_io_md": false, 00:11:51.635 "write_zeroes": true, 00:11:51.635 "zcopy": false, 00:11:51.635 "get_zone_info": false, 00:11:51.635 "zone_management": false, 00:11:51.635 "zone_append": false, 00:11:51.635 "compare": false, 00:11:51.635 "compare_and_write": false, 00:11:51.635 "abort": false, 00:11:51.635 "seek_hole": false, 00:11:51.635 "seek_data": false, 00:11:51.635 "copy": false, 00:11:51.635 "nvme_iov_md": false 00:11:51.635 }, 00:11:51.635 "memory_domains": [ 00:11:51.635 { 00:11:51.635 "dma_device_id": 
"system", 00:11:51.635 "dma_device_type": 1 00:11:51.635 }, 00:11:51.635 { 00:11:51.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.635 "dma_device_type": 2 00:11:51.635 }, 00:11:51.635 { 00:11:51.635 "dma_device_id": "system", 00:11:51.635 "dma_device_type": 1 00:11:51.635 }, 00:11:51.635 { 00:11:51.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.635 "dma_device_type": 2 00:11:51.635 }, 00:11:51.635 { 00:11:51.635 "dma_device_id": "system", 00:11:51.635 "dma_device_type": 1 00:11:51.635 }, 00:11:51.635 { 00:11:51.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.635 "dma_device_type": 2 00:11:51.635 }, 00:11:51.635 { 00:11:51.635 "dma_device_id": "system", 00:11:51.635 "dma_device_type": 1 00:11:51.635 }, 00:11:51.635 { 00:11:51.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.635 "dma_device_type": 2 00:11:51.635 } 00:11:51.635 ], 00:11:51.635 "driver_specific": { 00:11:51.635 "raid": { 00:11:51.635 "uuid": "abcf3a45-88a8-4790-a1dd-a7df22ec7445", 00:11:51.635 "strip_size_kb": 64, 00:11:51.635 "state": "online", 00:11:51.635 "raid_level": "concat", 00:11:51.635 "superblock": false, 00:11:51.635 "num_base_bdevs": 4, 00:11:51.635 "num_base_bdevs_discovered": 4, 00:11:51.635 "num_base_bdevs_operational": 4, 00:11:51.635 "base_bdevs_list": [ 00:11:51.635 { 00:11:51.635 "name": "NewBaseBdev", 00:11:51.635 "uuid": "12429680-cfe5-4040-844b-c02b5ed5e802", 00:11:51.635 "is_configured": true, 00:11:51.635 "data_offset": 0, 00:11:51.635 "data_size": 65536 00:11:51.635 }, 00:11:51.635 { 00:11:51.635 "name": "BaseBdev2", 00:11:51.635 "uuid": "73c72bb5-b808-4b59-81fd-c5db4cf8c547", 00:11:51.635 "is_configured": true, 00:11:51.635 "data_offset": 0, 00:11:51.635 "data_size": 65536 00:11:51.635 }, 00:11:51.635 { 00:11:51.635 "name": "BaseBdev3", 00:11:51.635 "uuid": "e36d81ca-0df8-47e6-8fd2-a666c66b12de", 00:11:51.635 "is_configured": true, 00:11:51.635 "data_offset": 0, 00:11:51.635 "data_size": 65536 00:11:51.635 }, 00:11:51.635 { 00:11:51.635 "name": 
"BaseBdev4", 00:11:51.635 "uuid": "8ad5874d-dab3-4409-9d91-7e3f8137d05d", 00:11:51.635 "is_configured": true, 00:11:51.635 "data_offset": 0, 00:11:51.635 "data_size": 65536 00:11:51.635 } 00:11:51.635 ] 00:11:51.635 } 00:11:51.635 } 00:11:51.635 }' 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:51.635 BaseBdev2 00:11:51.635 BaseBdev3 00:11:51.635 BaseBdev4' 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:51.635 12:29:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.635 [2024-09-30 12:29:03.505117] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:51.635 [2024-09-30 12:29:03.505187] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:51.635 [2024-09-30 12:29:03.505279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.635 [2024-09-30 12:29:03.505368] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.635 [2024-09-30 12:29:03.505415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71156 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 
-- # '[' -z 71156 ']' 00:11:51.635 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 71156 00:11:51.636 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:51.636 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:51.636 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71156 00:11:51.895 killing process with pid 71156 00:11:51.895 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:51.895 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:51.895 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71156' 00:11:51.895 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 71156 00:11:51.896 [2024-09-30 12:29:03.543087] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:51.896 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 71156 00:11:52.155 [2024-09-30 12:29:03.962517] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:53.537 00:11:53.537 real 0m11.421s 00:11:53.537 user 0m17.633s 00:11:53.537 sys 0m2.169s 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.537 ************************************ 00:11:53.537 END TEST raid_state_function_test 00:11:53.537 ************************************ 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.537 12:29:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:11:53.537 12:29:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:53.537 12:29:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.537 12:29:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:53.537 ************************************ 00:11:53.537 START TEST raid_state_function_test_sb 00:11:53.537 ************************************ 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:53.537 12:29:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:53.537 Process raid pid: 71822 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71822 00:11:53.537 12:29:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71822' 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71822 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 71822 ']' 00:11:53.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:53.537 12:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.797 [2024-09-30 12:29:05.464657] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:53.797 [2024-09-30 12:29:05.464793] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.797 [2024-09-30 12:29:05.634575] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.057 [2024-09-30 12:29:05.884960] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.317 [2024-09-30 12:29:06.122933] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.317 [2024-09-30 12:29:06.123047] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.577 [2024-09-30 12:29:06.296804] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:54.577 [2024-09-30 12:29:06.296863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:54.577 [2024-09-30 12:29:06.296872] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:54.577 [2024-09-30 12:29:06.296881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:54.577 [2024-09-30 12:29:06.296888] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:54.577 [2024-09-30 12:29:06.296897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:54.577 [2024-09-30 12:29:06.296923] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:54.577 [2024-09-30 12:29:06.296935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.577 
12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.577 "name": "Existed_Raid", 00:11:54.577 "uuid": "38c042a3-cd06-40ec-b282-36f9627851e9", 00:11:54.577 "strip_size_kb": 64, 00:11:54.577 "state": "configuring", 00:11:54.577 "raid_level": "concat", 00:11:54.577 "superblock": true, 00:11:54.577 "num_base_bdevs": 4, 00:11:54.577 "num_base_bdevs_discovered": 0, 00:11:54.577 "num_base_bdevs_operational": 4, 00:11:54.577 "base_bdevs_list": [ 00:11:54.577 { 00:11:54.577 "name": "BaseBdev1", 00:11:54.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.577 "is_configured": false, 00:11:54.577 "data_offset": 0, 00:11:54.577 "data_size": 0 00:11:54.577 }, 00:11:54.577 { 00:11:54.577 "name": "BaseBdev2", 00:11:54.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.577 "is_configured": false, 00:11:54.577 "data_offset": 0, 00:11:54.577 "data_size": 0 00:11:54.577 }, 00:11:54.577 { 00:11:54.577 "name": "BaseBdev3", 00:11:54.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.577 "is_configured": false, 00:11:54.577 "data_offset": 0, 00:11:54.577 "data_size": 0 00:11:54.577 }, 00:11:54.577 { 00:11:54.577 "name": "BaseBdev4", 00:11:54.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.577 "is_configured": false, 00:11:54.577 "data_offset": 0, 00:11:54.577 "data_size": 0 00:11:54.577 } 00:11:54.577 ] 00:11:54.577 }' 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.577 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.837 12:29:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:54.837 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.837 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.837 [2024-09-30 12:29:06.727892] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:54.837 [2024-09-30 12:29:06.727939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.097 [2024-09-30 12:29:06.735946] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.097 [2024-09-30 12:29:06.736037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.097 [2024-09-30 12:29:06.736062] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:55.097 [2024-09-30 12:29:06.736074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:55.097 [2024-09-30 12:29:06.736081] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:55.097 [2024-09-30 12:29:06.736091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:55.097 [2024-09-30 12:29:06.736097] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:55.097 [2024-09-30 12:29:06.736106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.097 [2024-09-30 12:29:06.810007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.097 BaseBdev1 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.097 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.097 [ 00:11:55.097 { 00:11:55.097 "name": "BaseBdev1", 00:11:55.097 "aliases": [ 00:11:55.097 "3ef9eb24-0e96-4c7d-abf5-9af7d8a18b32" 00:11:55.097 ], 00:11:55.097 "product_name": "Malloc disk", 00:11:55.097 "block_size": 512, 00:11:55.097 "num_blocks": 65536, 00:11:55.098 "uuid": "3ef9eb24-0e96-4c7d-abf5-9af7d8a18b32", 00:11:55.098 "assigned_rate_limits": { 00:11:55.098 "rw_ios_per_sec": 0, 00:11:55.098 "rw_mbytes_per_sec": 0, 00:11:55.098 "r_mbytes_per_sec": 0, 00:11:55.098 "w_mbytes_per_sec": 0 00:11:55.098 }, 00:11:55.098 "claimed": true, 00:11:55.098 "claim_type": "exclusive_write", 00:11:55.098 "zoned": false, 00:11:55.098 "supported_io_types": { 00:11:55.098 "read": true, 00:11:55.098 "write": true, 00:11:55.098 "unmap": true, 00:11:55.098 "flush": true, 00:11:55.098 "reset": true, 00:11:55.098 "nvme_admin": false, 00:11:55.098 "nvme_io": false, 00:11:55.098 "nvme_io_md": false, 00:11:55.098 "write_zeroes": true, 00:11:55.098 "zcopy": true, 00:11:55.098 "get_zone_info": false, 00:11:55.098 "zone_management": false, 00:11:55.098 "zone_append": false, 00:11:55.098 "compare": false, 00:11:55.098 "compare_and_write": false, 00:11:55.098 "abort": true, 00:11:55.098 "seek_hole": false, 00:11:55.098 "seek_data": false, 00:11:55.098 "copy": true, 00:11:55.098 "nvme_iov_md": false 00:11:55.098 }, 00:11:55.098 "memory_domains": [ 00:11:55.098 { 00:11:55.098 "dma_device_id": "system", 00:11:55.098 "dma_device_type": 1 00:11:55.098 }, 00:11:55.098 { 00:11:55.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.098 "dma_device_type": 2 00:11:55.098 } 
00:11:55.098 ], 00:11:55.098 "driver_specific": {} 00:11:55.098 } 00:11:55.098 ] 00:11:55.098 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.098 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:55.098 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:55.098 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.098 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.098 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.098 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.098 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.098 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.098 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.098 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.098 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.098 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.098 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.098 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.098 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.098 12:29:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.098 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.098 "name": "Existed_Raid", 00:11:55.098 "uuid": "3004ffee-cc23-4f7b-a532-8a200e007200", 00:11:55.098 "strip_size_kb": 64, 00:11:55.098 "state": "configuring", 00:11:55.098 "raid_level": "concat", 00:11:55.098 "superblock": true, 00:11:55.098 "num_base_bdevs": 4, 00:11:55.098 "num_base_bdevs_discovered": 1, 00:11:55.098 "num_base_bdevs_operational": 4, 00:11:55.098 "base_bdevs_list": [ 00:11:55.098 { 00:11:55.098 "name": "BaseBdev1", 00:11:55.098 "uuid": "3ef9eb24-0e96-4c7d-abf5-9af7d8a18b32", 00:11:55.098 "is_configured": true, 00:11:55.098 "data_offset": 2048, 00:11:55.098 "data_size": 63488 00:11:55.098 }, 00:11:55.098 { 00:11:55.098 "name": "BaseBdev2", 00:11:55.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.098 "is_configured": false, 00:11:55.098 "data_offset": 0, 00:11:55.098 "data_size": 0 00:11:55.098 }, 00:11:55.098 { 00:11:55.098 "name": "BaseBdev3", 00:11:55.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.098 "is_configured": false, 00:11:55.098 "data_offset": 0, 00:11:55.098 "data_size": 0 00:11:55.098 }, 00:11:55.098 { 00:11:55.098 "name": "BaseBdev4", 00:11:55.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.098 "is_configured": false, 00:11:55.098 "data_offset": 0, 00:11:55.098 "data_size": 0 00:11:55.098 } 00:11:55.098 ] 00:11:55.098 }' 00:11:55.098 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.098 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.667 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:55.667 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.667 12:29:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.667 [2024-09-30 12:29:07.265257] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:55.667 [2024-09-30 12:29:07.265305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.668 [2024-09-30 12:29:07.277306] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.668 [2024-09-30 12:29:07.279446] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:55.668 [2024-09-30 12:29:07.279524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:55.668 [2024-09-30 12:29:07.279563] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:55.668 [2024-09-30 12:29:07.279587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:55.668 [2024-09-30 12:29:07.279605] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:55.668 [2024-09-30 12:29:07.279625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:55.668 "name": "Existed_Raid", 00:11:55.668 "uuid": "83674dc4-a405-4506-9320-29ab3f8a0f0d", 00:11:55.668 "strip_size_kb": 64, 00:11:55.668 "state": "configuring", 00:11:55.668 "raid_level": "concat", 00:11:55.668 "superblock": true, 00:11:55.668 "num_base_bdevs": 4, 00:11:55.668 "num_base_bdevs_discovered": 1, 00:11:55.668 "num_base_bdevs_operational": 4, 00:11:55.668 "base_bdevs_list": [ 00:11:55.668 { 00:11:55.668 "name": "BaseBdev1", 00:11:55.668 "uuid": "3ef9eb24-0e96-4c7d-abf5-9af7d8a18b32", 00:11:55.668 "is_configured": true, 00:11:55.668 "data_offset": 2048, 00:11:55.668 "data_size": 63488 00:11:55.668 }, 00:11:55.668 { 00:11:55.668 "name": "BaseBdev2", 00:11:55.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.668 "is_configured": false, 00:11:55.668 "data_offset": 0, 00:11:55.668 "data_size": 0 00:11:55.668 }, 00:11:55.668 { 00:11:55.668 "name": "BaseBdev3", 00:11:55.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.668 "is_configured": false, 00:11:55.668 "data_offset": 0, 00:11:55.668 "data_size": 0 00:11:55.668 }, 00:11:55.668 { 00:11:55.668 "name": "BaseBdev4", 00:11:55.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.668 "is_configured": false, 00:11:55.668 "data_offset": 0, 00:11:55.668 "data_size": 0 00:11:55.668 } 00:11:55.668 ] 00:11:55.668 }' 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.668 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.929 [2024-09-30 12:29:07.727613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:55.929 BaseBdev2 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.929 [ 00:11:55.929 { 00:11:55.929 "name": "BaseBdev2", 00:11:55.929 "aliases": [ 00:11:55.929 "49831b98-7d19-4745-b12e-1f5eb3d489aa" 00:11:55.929 ], 00:11:55.929 "product_name": "Malloc disk", 00:11:55.929 "block_size": 512, 00:11:55.929 "num_blocks": 65536, 00:11:55.929 "uuid": "49831b98-7d19-4745-b12e-1f5eb3d489aa", 
00:11:55.929 "assigned_rate_limits": { 00:11:55.929 "rw_ios_per_sec": 0, 00:11:55.929 "rw_mbytes_per_sec": 0, 00:11:55.929 "r_mbytes_per_sec": 0, 00:11:55.929 "w_mbytes_per_sec": 0 00:11:55.929 }, 00:11:55.929 "claimed": true, 00:11:55.929 "claim_type": "exclusive_write", 00:11:55.929 "zoned": false, 00:11:55.929 "supported_io_types": { 00:11:55.929 "read": true, 00:11:55.929 "write": true, 00:11:55.929 "unmap": true, 00:11:55.929 "flush": true, 00:11:55.929 "reset": true, 00:11:55.929 "nvme_admin": false, 00:11:55.929 "nvme_io": false, 00:11:55.929 "nvme_io_md": false, 00:11:55.929 "write_zeroes": true, 00:11:55.929 "zcopy": true, 00:11:55.929 "get_zone_info": false, 00:11:55.929 "zone_management": false, 00:11:55.929 "zone_append": false, 00:11:55.929 "compare": false, 00:11:55.929 "compare_and_write": false, 00:11:55.929 "abort": true, 00:11:55.929 "seek_hole": false, 00:11:55.929 "seek_data": false, 00:11:55.929 "copy": true, 00:11:55.929 "nvme_iov_md": false 00:11:55.929 }, 00:11:55.929 "memory_domains": [ 00:11:55.929 { 00:11:55.929 "dma_device_id": "system", 00:11:55.929 "dma_device_type": 1 00:11:55.929 }, 00:11:55.929 { 00:11:55.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.929 "dma_device_type": 2 00:11:55.929 } 00:11:55.929 ], 00:11:55.929 "driver_specific": {} 00:11:55.929 } 00:11:55.929 ] 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.929 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.929 "name": "Existed_Raid", 00:11:55.929 "uuid": "83674dc4-a405-4506-9320-29ab3f8a0f0d", 00:11:55.929 "strip_size_kb": 64, 00:11:55.929 "state": "configuring", 00:11:55.929 "raid_level": "concat", 00:11:55.929 "superblock": true, 00:11:55.929 "num_base_bdevs": 4, 00:11:55.929 "num_base_bdevs_discovered": 2, 00:11:55.929 
"num_base_bdevs_operational": 4, 00:11:55.929 "base_bdevs_list": [ 00:11:55.930 { 00:11:55.930 "name": "BaseBdev1", 00:11:55.930 "uuid": "3ef9eb24-0e96-4c7d-abf5-9af7d8a18b32", 00:11:55.930 "is_configured": true, 00:11:55.930 "data_offset": 2048, 00:11:55.930 "data_size": 63488 00:11:55.930 }, 00:11:55.930 { 00:11:55.930 "name": "BaseBdev2", 00:11:55.930 "uuid": "49831b98-7d19-4745-b12e-1f5eb3d489aa", 00:11:55.930 "is_configured": true, 00:11:55.930 "data_offset": 2048, 00:11:55.930 "data_size": 63488 00:11:55.930 }, 00:11:55.930 { 00:11:55.930 "name": "BaseBdev3", 00:11:55.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.930 "is_configured": false, 00:11:55.930 "data_offset": 0, 00:11:55.930 "data_size": 0 00:11:55.930 }, 00:11:55.930 { 00:11:55.930 "name": "BaseBdev4", 00:11:55.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.930 "is_configured": false, 00:11:55.930 "data_offset": 0, 00:11:55.930 "data_size": 0 00:11:55.930 } 00:11:55.930 ] 00:11:55.930 }' 00:11:55.930 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.930 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.499 [2024-09-30 12:29:08.197559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.499 BaseBdev3 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.499 [ 00:11:56.499 { 00:11:56.499 "name": "BaseBdev3", 00:11:56.499 "aliases": [ 00:11:56.499 "91769627-7cf5-485c-ab63-55333d69978b" 00:11:56.499 ], 00:11:56.499 "product_name": "Malloc disk", 00:11:56.499 "block_size": 512, 00:11:56.499 "num_blocks": 65536, 00:11:56.499 "uuid": "91769627-7cf5-485c-ab63-55333d69978b", 00:11:56.499 "assigned_rate_limits": { 00:11:56.499 "rw_ios_per_sec": 0, 00:11:56.499 "rw_mbytes_per_sec": 0, 00:11:56.499 "r_mbytes_per_sec": 0, 00:11:56.499 "w_mbytes_per_sec": 0 00:11:56.499 }, 00:11:56.499 "claimed": true, 00:11:56.499 "claim_type": "exclusive_write", 00:11:56.499 "zoned": false, 00:11:56.499 "supported_io_types": { 
00:11:56.499 "read": true, 00:11:56.499 "write": true, 00:11:56.499 "unmap": true, 00:11:56.499 "flush": true, 00:11:56.499 "reset": true, 00:11:56.499 "nvme_admin": false, 00:11:56.499 "nvme_io": false, 00:11:56.499 "nvme_io_md": false, 00:11:56.499 "write_zeroes": true, 00:11:56.499 "zcopy": true, 00:11:56.499 "get_zone_info": false, 00:11:56.499 "zone_management": false, 00:11:56.499 "zone_append": false, 00:11:56.499 "compare": false, 00:11:56.499 "compare_and_write": false, 00:11:56.499 "abort": true, 00:11:56.499 "seek_hole": false, 00:11:56.499 "seek_data": false, 00:11:56.499 "copy": true, 00:11:56.499 "nvme_iov_md": false 00:11:56.499 }, 00:11:56.499 "memory_domains": [ 00:11:56.499 { 00:11:56.499 "dma_device_id": "system", 00:11:56.499 "dma_device_type": 1 00:11:56.499 }, 00:11:56.499 { 00:11:56.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.499 "dma_device_type": 2 00:11:56.499 } 00:11:56.499 ], 00:11:56.499 "driver_specific": {} 00:11:56.499 } 00:11:56.499 ] 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.499 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.499 "name": "Existed_Raid", 00:11:56.499 "uuid": "83674dc4-a405-4506-9320-29ab3f8a0f0d", 00:11:56.499 "strip_size_kb": 64, 00:11:56.499 "state": "configuring", 00:11:56.499 "raid_level": "concat", 00:11:56.499 "superblock": true, 00:11:56.499 "num_base_bdevs": 4, 00:11:56.499 "num_base_bdevs_discovered": 3, 00:11:56.499 "num_base_bdevs_operational": 4, 00:11:56.499 "base_bdevs_list": [ 00:11:56.499 { 00:11:56.499 "name": "BaseBdev1", 00:11:56.499 "uuid": "3ef9eb24-0e96-4c7d-abf5-9af7d8a18b32", 00:11:56.499 "is_configured": true, 00:11:56.499 "data_offset": 2048, 00:11:56.499 "data_size": 63488 00:11:56.499 }, 00:11:56.499 { 00:11:56.499 "name": "BaseBdev2", 00:11:56.499 
"uuid": "49831b98-7d19-4745-b12e-1f5eb3d489aa", 00:11:56.499 "is_configured": true, 00:11:56.499 "data_offset": 2048, 00:11:56.499 "data_size": 63488 00:11:56.499 }, 00:11:56.499 { 00:11:56.499 "name": "BaseBdev3", 00:11:56.500 "uuid": "91769627-7cf5-485c-ab63-55333d69978b", 00:11:56.500 "is_configured": true, 00:11:56.500 "data_offset": 2048, 00:11:56.500 "data_size": 63488 00:11:56.500 }, 00:11:56.500 { 00:11:56.500 "name": "BaseBdev4", 00:11:56.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.500 "is_configured": false, 00:11:56.500 "data_offset": 0, 00:11:56.500 "data_size": 0 00:11:56.500 } 00:11:56.500 ] 00:11:56.500 }' 00:11:56.500 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.500 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.068 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:57.068 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.068 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.068 [2024-09-30 12:29:08.732241] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:57.068 [2024-09-30 12:29:08.732647] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:57.068 [2024-09-30 12:29:08.732722] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:57.069 [2024-09-30 12:29:08.733047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:57.069 [2024-09-30 12:29:08.733249] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:57.069 [2024-09-30 12:29:08.733296] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:11:57.069 BaseBdev4 00:11:57.069 [2024-09-30 12:29:08.733473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.069 [ 00:11:57.069 { 00:11:57.069 "name": "BaseBdev4", 00:11:57.069 "aliases": [ 00:11:57.069 "85b31b17-632b-4e8f-b82e-79baf4523b5b" 00:11:57.069 ], 00:11:57.069 "product_name": "Malloc disk", 00:11:57.069 "block_size": 512, 
00:11:57.069 "num_blocks": 65536, 00:11:57.069 "uuid": "85b31b17-632b-4e8f-b82e-79baf4523b5b", 00:11:57.069 "assigned_rate_limits": { 00:11:57.069 "rw_ios_per_sec": 0, 00:11:57.069 "rw_mbytes_per_sec": 0, 00:11:57.069 "r_mbytes_per_sec": 0, 00:11:57.069 "w_mbytes_per_sec": 0 00:11:57.069 }, 00:11:57.069 "claimed": true, 00:11:57.069 "claim_type": "exclusive_write", 00:11:57.069 "zoned": false, 00:11:57.069 "supported_io_types": { 00:11:57.069 "read": true, 00:11:57.069 "write": true, 00:11:57.069 "unmap": true, 00:11:57.069 "flush": true, 00:11:57.069 "reset": true, 00:11:57.069 "nvme_admin": false, 00:11:57.069 "nvme_io": false, 00:11:57.069 "nvme_io_md": false, 00:11:57.069 "write_zeroes": true, 00:11:57.069 "zcopy": true, 00:11:57.069 "get_zone_info": false, 00:11:57.069 "zone_management": false, 00:11:57.069 "zone_append": false, 00:11:57.069 "compare": false, 00:11:57.069 "compare_and_write": false, 00:11:57.069 "abort": true, 00:11:57.069 "seek_hole": false, 00:11:57.069 "seek_data": false, 00:11:57.069 "copy": true, 00:11:57.069 "nvme_iov_md": false 00:11:57.069 }, 00:11:57.069 "memory_domains": [ 00:11:57.069 { 00:11:57.069 "dma_device_id": "system", 00:11:57.069 "dma_device_type": 1 00:11:57.069 }, 00:11:57.069 { 00:11:57.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.069 "dma_device_type": 2 00:11:57.069 } 00:11:57.069 ], 00:11:57.069 "driver_specific": {} 00:11:57.069 } 00:11:57.069 ] 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.069 "name": "Existed_Raid", 00:11:57.069 "uuid": "83674dc4-a405-4506-9320-29ab3f8a0f0d", 00:11:57.069 "strip_size_kb": 64, 00:11:57.069 "state": "online", 00:11:57.069 "raid_level": "concat", 00:11:57.069 "superblock": true, 00:11:57.069 "num_base_bdevs": 
4, 00:11:57.069 "num_base_bdevs_discovered": 4, 00:11:57.069 "num_base_bdevs_operational": 4, 00:11:57.069 "base_bdevs_list": [ 00:11:57.069 { 00:11:57.069 "name": "BaseBdev1", 00:11:57.069 "uuid": "3ef9eb24-0e96-4c7d-abf5-9af7d8a18b32", 00:11:57.069 "is_configured": true, 00:11:57.069 "data_offset": 2048, 00:11:57.069 "data_size": 63488 00:11:57.069 }, 00:11:57.069 { 00:11:57.069 "name": "BaseBdev2", 00:11:57.069 "uuid": "49831b98-7d19-4745-b12e-1f5eb3d489aa", 00:11:57.069 "is_configured": true, 00:11:57.069 "data_offset": 2048, 00:11:57.069 "data_size": 63488 00:11:57.069 }, 00:11:57.069 { 00:11:57.069 "name": "BaseBdev3", 00:11:57.069 "uuid": "91769627-7cf5-485c-ab63-55333d69978b", 00:11:57.069 "is_configured": true, 00:11:57.069 "data_offset": 2048, 00:11:57.069 "data_size": 63488 00:11:57.069 }, 00:11:57.069 { 00:11:57.069 "name": "BaseBdev4", 00:11:57.069 "uuid": "85b31b17-632b-4e8f-b82e-79baf4523b5b", 00:11:57.069 "is_configured": true, 00:11:57.069 "data_offset": 2048, 00:11:57.069 "data_size": 63488 00:11:57.069 } 00:11:57.069 ] 00:11:57.069 }' 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.069 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.638 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:57.638 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:57.638 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:57.638 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:57.638 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:57.638 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:57.638 
12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:57.638 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.638 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.638 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:57.638 [2024-09-30 12:29:09.259717] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.638 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.638 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:57.638 "name": "Existed_Raid", 00:11:57.638 "aliases": [ 00:11:57.638 "83674dc4-a405-4506-9320-29ab3f8a0f0d" 00:11:57.638 ], 00:11:57.638 "product_name": "Raid Volume", 00:11:57.638 "block_size": 512, 00:11:57.638 "num_blocks": 253952, 00:11:57.638 "uuid": "83674dc4-a405-4506-9320-29ab3f8a0f0d", 00:11:57.638 "assigned_rate_limits": { 00:11:57.638 "rw_ios_per_sec": 0, 00:11:57.638 "rw_mbytes_per_sec": 0, 00:11:57.638 "r_mbytes_per_sec": 0, 00:11:57.638 "w_mbytes_per_sec": 0 00:11:57.638 }, 00:11:57.638 "claimed": false, 00:11:57.638 "zoned": false, 00:11:57.638 "supported_io_types": { 00:11:57.638 "read": true, 00:11:57.638 "write": true, 00:11:57.638 "unmap": true, 00:11:57.638 "flush": true, 00:11:57.638 "reset": true, 00:11:57.638 "nvme_admin": false, 00:11:57.638 "nvme_io": false, 00:11:57.638 "nvme_io_md": false, 00:11:57.638 "write_zeroes": true, 00:11:57.638 "zcopy": false, 00:11:57.638 "get_zone_info": false, 00:11:57.638 "zone_management": false, 00:11:57.638 "zone_append": false, 00:11:57.638 "compare": false, 00:11:57.638 "compare_and_write": false, 00:11:57.638 "abort": false, 00:11:57.638 "seek_hole": false, 00:11:57.638 "seek_data": false, 00:11:57.638 "copy": false, 00:11:57.638 
"nvme_iov_md": false 00:11:57.638 }, 00:11:57.638 "memory_domains": [ 00:11:57.638 { 00:11:57.638 "dma_device_id": "system", 00:11:57.638 "dma_device_type": 1 00:11:57.638 }, 00:11:57.638 { 00:11:57.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.638 "dma_device_type": 2 00:11:57.638 }, 00:11:57.638 { 00:11:57.638 "dma_device_id": "system", 00:11:57.638 "dma_device_type": 1 00:11:57.638 }, 00:11:57.638 { 00:11:57.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.638 "dma_device_type": 2 00:11:57.638 }, 00:11:57.638 { 00:11:57.638 "dma_device_id": "system", 00:11:57.638 "dma_device_type": 1 00:11:57.638 }, 00:11:57.638 { 00:11:57.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.638 "dma_device_type": 2 00:11:57.638 }, 00:11:57.638 { 00:11:57.638 "dma_device_id": "system", 00:11:57.638 "dma_device_type": 1 00:11:57.638 }, 00:11:57.638 { 00:11:57.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.638 "dma_device_type": 2 00:11:57.638 } 00:11:57.638 ], 00:11:57.638 "driver_specific": { 00:11:57.638 "raid": { 00:11:57.638 "uuid": "83674dc4-a405-4506-9320-29ab3f8a0f0d", 00:11:57.638 "strip_size_kb": 64, 00:11:57.638 "state": "online", 00:11:57.638 "raid_level": "concat", 00:11:57.638 "superblock": true, 00:11:57.638 "num_base_bdevs": 4, 00:11:57.638 "num_base_bdevs_discovered": 4, 00:11:57.638 "num_base_bdevs_operational": 4, 00:11:57.638 "base_bdevs_list": [ 00:11:57.638 { 00:11:57.638 "name": "BaseBdev1", 00:11:57.638 "uuid": "3ef9eb24-0e96-4c7d-abf5-9af7d8a18b32", 00:11:57.638 "is_configured": true, 00:11:57.638 "data_offset": 2048, 00:11:57.638 "data_size": 63488 00:11:57.638 }, 00:11:57.638 { 00:11:57.638 "name": "BaseBdev2", 00:11:57.638 "uuid": "49831b98-7d19-4745-b12e-1f5eb3d489aa", 00:11:57.638 "is_configured": true, 00:11:57.638 "data_offset": 2048, 00:11:57.638 "data_size": 63488 00:11:57.638 }, 00:11:57.638 { 00:11:57.638 "name": "BaseBdev3", 00:11:57.638 "uuid": "91769627-7cf5-485c-ab63-55333d69978b", 00:11:57.638 "is_configured": true, 
00:11:57.638 "data_offset": 2048, 00:11:57.638 "data_size": 63488 00:11:57.638 }, 00:11:57.638 { 00:11:57.638 "name": "BaseBdev4", 00:11:57.638 "uuid": "85b31b17-632b-4e8f-b82e-79baf4523b5b", 00:11:57.638 "is_configured": true, 00:11:57.638 "data_offset": 2048, 00:11:57.638 "data_size": 63488 00:11:57.638 } 00:11:57.638 ] 00:11:57.638 } 00:11:57.638 } 00:11:57.638 }' 00:11:57.638 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:57.639 BaseBdev2 00:11:57.639 BaseBdev3 00:11:57.639 BaseBdev4' 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.639 12:29:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.639 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.899 [2024-09-30 12:29:09.570883] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.899 [2024-09-30 12:29:09.570958] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.899 [2024-09-30 12:29:09.571030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.899 "name": "Existed_Raid", 00:11:57.899 "uuid": "83674dc4-a405-4506-9320-29ab3f8a0f0d", 00:11:57.899 "strip_size_kb": 64, 00:11:57.899 "state": "offline", 00:11:57.899 "raid_level": "concat", 00:11:57.899 "superblock": true, 00:11:57.899 "num_base_bdevs": 4, 00:11:57.899 "num_base_bdevs_discovered": 3, 00:11:57.899 "num_base_bdevs_operational": 3, 00:11:57.899 "base_bdevs_list": [ 00:11:57.899 { 00:11:57.899 "name": null, 00:11:57.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.899 "is_configured": false, 00:11:57.899 "data_offset": 0, 00:11:57.899 "data_size": 63488 00:11:57.899 }, 00:11:57.899 { 00:11:57.899 "name": "BaseBdev2", 00:11:57.899 "uuid": "49831b98-7d19-4745-b12e-1f5eb3d489aa", 00:11:57.899 "is_configured": true, 00:11:57.899 "data_offset": 2048, 00:11:57.899 "data_size": 63488 00:11:57.899 }, 00:11:57.899 { 00:11:57.899 "name": "BaseBdev3", 00:11:57.899 "uuid": "91769627-7cf5-485c-ab63-55333d69978b", 00:11:57.899 "is_configured": true, 00:11:57.899 "data_offset": 2048, 00:11:57.899 "data_size": 63488 00:11:57.899 }, 00:11:57.899 { 00:11:57.899 "name": "BaseBdev4", 00:11:57.899 "uuid": "85b31b17-632b-4e8f-b82e-79baf4523b5b", 00:11:57.899 "is_configured": true, 00:11:57.899 "data_offset": 2048, 00:11:57.899 "data_size": 63488 00:11:57.899 } 00:11:57.899 ] 00:11:57.899 }' 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.899 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.468 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:58.468 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:58.468 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:58.468 12:29:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.468 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.468 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.468 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.468 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:58.468 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:58.468 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:58.468 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.468 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.468 [2024-09-30 12:29:10.158667] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:58.468 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.469 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:58.469 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:58.469 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.469 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:58.469 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.469 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.469 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:58.469 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:58.469 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:58.469 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:58.469 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.469 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.469 [2024-09-30 12:29:10.317915] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:58.728 12:29:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.728 [2024-09-30 12:29:10.474780] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:58.728 [2024-09-30 12:29:10.474880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.728 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.988 BaseBdev2 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.988 [ 00:11:58.988 { 00:11:58.988 "name": "BaseBdev2", 00:11:58.988 "aliases": [ 00:11:58.988 
"6d40c353-1fed-4a04-b60b-b8bf1712094d" 00:11:58.988 ], 00:11:58.988 "product_name": "Malloc disk", 00:11:58.988 "block_size": 512, 00:11:58.988 "num_blocks": 65536, 00:11:58.988 "uuid": "6d40c353-1fed-4a04-b60b-b8bf1712094d", 00:11:58.988 "assigned_rate_limits": { 00:11:58.988 "rw_ios_per_sec": 0, 00:11:58.988 "rw_mbytes_per_sec": 0, 00:11:58.988 "r_mbytes_per_sec": 0, 00:11:58.988 "w_mbytes_per_sec": 0 00:11:58.988 }, 00:11:58.988 "claimed": false, 00:11:58.988 "zoned": false, 00:11:58.988 "supported_io_types": { 00:11:58.988 "read": true, 00:11:58.988 "write": true, 00:11:58.988 "unmap": true, 00:11:58.988 "flush": true, 00:11:58.988 "reset": true, 00:11:58.988 "nvme_admin": false, 00:11:58.988 "nvme_io": false, 00:11:58.988 "nvme_io_md": false, 00:11:58.988 "write_zeroes": true, 00:11:58.988 "zcopy": true, 00:11:58.988 "get_zone_info": false, 00:11:58.988 "zone_management": false, 00:11:58.988 "zone_append": false, 00:11:58.988 "compare": false, 00:11:58.988 "compare_and_write": false, 00:11:58.988 "abort": true, 00:11:58.988 "seek_hole": false, 00:11:58.988 "seek_data": false, 00:11:58.988 "copy": true, 00:11:58.988 "nvme_iov_md": false 00:11:58.988 }, 00:11:58.988 "memory_domains": [ 00:11:58.988 { 00:11:58.988 "dma_device_id": "system", 00:11:58.988 "dma_device_type": 1 00:11:58.988 }, 00:11:58.988 { 00:11:58.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.988 "dma_device_type": 2 00:11:58.988 } 00:11:58.988 ], 00:11:58.988 "driver_specific": {} 00:11:58.988 } 00:11:58.988 ] 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.988 12:29:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.988 BaseBdev3 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:58.988 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.989 [ 00:11:58.989 { 
00:11:58.989 "name": "BaseBdev3", 00:11:58.989 "aliases": [ 00:11:58.989 "6bf20608-5de6-4d1c-ad18-029b62ce896e" 00:11:58.989 ], 00:11:58.989 "product_name": "Malloc disk", 00:11:58.989 "block_size": 512, 00:11:58.989 "num_blocks": 65536, 00:11:58.989 "uuid": "6bf20608-5de6-4d1c-ad18-029b62ce896e", 00:11:58.989 "assigned_rate_limits": { 00:11:58.989 "rw_ios_per_sec": 0, 00:11:58.989 "rw_mbytes_per_sec": 0, 00:11:58.989 "r_mbytes_per_sec": 0, 00:11:58.989 "w_mbytes_per_sec": 0 00:11:58.989 }, 00:11:58.989 "claimed": false, 00:11:58.989 "zoned": false, 00:11:58.989 "supported_io_types": { 00:11:58.989 "read": true, 00:11:58.989 "write": true, 00:11:58.989 "unmap": true, 00:11:58.989 "flush": true, 00:11:58.989 "reset": true, 00:11:58.989 "nvme_admin": false, 00:11:58.989 "nvme_io": false, 00:11:58.989 "nvme_io_md": false, 00:11:58.989 "write_zeroes": true, 00:11:58.989 "zcopy": true, 00:11:58.989 "get_zone_info": false, 00:11:58.989 "zone_management": false, 00:11:58.989 "zone_append": false, 00:11:58.989 "compare": false, 00:11:58.989 "compare_and_write": false, 00:11:58.989 "abort": true, 00:11:58.989 "seek_hole": false, 00:11:58.989 "seek_data": false, 00:11:58.989 "copy": true, 00:11:58.989 "nvme_iov_md": false 00:11:58.989 }, 00:11:58.989 "memory_domains": [ 00:11:58.989 { 00:11:58.989 "dma_device_id": "system", 00:11:58.989 "dma_device_type": 1 00:11:58.989 }, 00:11:58.989 { 00:11:58.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.989 "dma_device_type": 2 00:11:58.989 } 00:11:58.989 ], 00:11:58.989 "driver_specific": {} 00:11:58.989 } 00:11:58.989 ] 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.989 BaseBdev4 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.989 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:58.989 [ 00:11:58.989 { 00:11:58.989 "name": "BaseBdev4", 00:11:58.989 "aliases": [ 00:11:58.989 "6f597457-2c75-4fd8-8d94-d878df57d3da" 00:11:58.989 ], 00:11:58.989 "product_name": "Malloc disk", 00:11:58.989 "block_size": 512, 00:11:58.989 "num_blocks": 65536, 00:11:58.989 "uuid": "6f597457-2c75-4fd8-8d94-d878df57d3da", 00:11:58.989 "assigned_rate_limits": { 00:11:58.989 "rw_ios_per_sec": 0, 00:11:58.989 "rw_mbytes_per_sec": 0, 00:11:58.989 "r_mbytes_per_sec": 0, 00:11:58.989 "w_mbytes_per_sec": 0 00:11:58.989 }, 00:11:58.989 "claimed": false, 00:11:58.989 "zoned": false, 00:11:58.989 "supported_io_types": { 00:11:58.989 "read": true, 00:11:58.989 "write": true, 00:11:58.989 "unmap": true, 00:11:58.989 "flush": true, 00:11:58.989 "reset": true, 00:11:58.989 "nvme_admin": false, 00:11:58.989 "nvme_io": false, 00:11:58.989 "nvme_io_md": false, 00:11:58.989 "write_zeroes": true, 00:11:58.989 "zcopy": true, 00:11:58.989 "get_zone_info": false, 00:11:58.989 "zone_management": false, 00:11:58.989 "zone_append": false, 00:11:58.989 "compare": false, 00:11:58.989 "compare_and_write": false, 00:11:58.989 "abort": true, 00:11:58.989 "seek_hole": false, 00:11:58.989 "seek_data": false, 00:11:58.989 "copy": true, 00:11:58.989 "nvme_iov_md": false 00:11:58.989 }, 00:11:58.989 "memory_domains": [ 00:11:58.989 { 00:11:58.989 "dma_device_id": "system", 00:11:58.989 "dma_device_type": 1 00:11:58.989 }, 00:11:58.989 { 00:11:58.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.989 "dma_device_type": 2 00:11:58.989 } 00:11:58.989 ], 00:11:58.989 "driver_specific": {} 00:11:58.989 } 00:11:58.989 ] 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:59.249 12:29:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.249 [2024-09-30 12:29:10.890945] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:59.249 [2024-09-30 12:29:10.891064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:59.249 [2024-09-30 12:29:10.891107] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.249 [2024-09-30 12:29:10.893220] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:59.249 [2024-09-30 12:29:10.893312] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.249 "name": "Existed_Raid", 00:11:59.249 "uuid": "4655042f-a091-4cda-9fc8-86b01670de7d", 00:11:59.249 "strip_size_kb": 64, 00:11:59.249 "state": "configuring", 00:11:59.249 "raid_level": "concat", 00:11:59.249 "superblock": true, 00:11:59.249 "num_base_bdevs": 4, 00:11:59.249 "num_base_bdevs_discovered": 3, 00:11:59.249 "num_base_bdevs_operational": 4, 00:11:59.249 "base_bdevs_list": [ 00:11:59.249 { 00:11:59.249 "name": "BaseBdev1", 00:11:59.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.249 "is_configured": false, 00:11:59.249 "data_offset": 0, 00:11:59.249 "data_size": 0 00:11:59.249 }, 00:11:59.249 { 00:11:59.249 "name": "BaseBdev2", 00:11:59.249 "uuid": "6d40c353-1fed-4a04-b60b-b8bf1712094d", 00:11:59.249 "is_configured": true, 00:11:59.249 "data_offset": 2048, 00:11:59.249 "data_size": 63488 
00:11:59.249 }, 00:11:59.249 { 00:11:59.249 "name": "BaseBdev3", 00:11:59.249 "uuid": "6bf20608-5de6-4d1c-ad18-029b62ce896e", 00:11:59.249 "is_configured": true, 00:11:59.249 "data_offset": 2048, 00:11:59.249 "data_size": 63488 00:11:59.249 }, 00:11:59.249 { 00:11:59.249 "name": "BaseBdev4", 00:11:59.249 "uuid": "6f597457-2c75-4fd8-8d94-d878df57d3da", 00:11:59.249 "is_configured": true, 00:11:59.249 "data_offset": 2048, 00:11:59.249 "data_size": 63488 00:11:59.249 } 00:11:59.249 ] 00:11:59.249 }' 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.249 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.509 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:59.509 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.509 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.509 [2024-09-30 12:29:11.346135] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:59.509 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.509 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:59.509 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.509 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.509 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.509 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.509 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:59.509 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.509 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.509 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.509 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.509 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.509 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.509 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.509 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.509 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.769 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.769 "name": "Existed_Raid", 00:11:59.769 "uuid": "4655042f-a091-4cda-9fc8-86b01670de7d", 00:11:59.769 "strip_size_kb": 64, 00:11:59.769 "state": "configuring", 00:11:59.769 "raid_level": "concat", 00:11:59.769 "superblock": true, 00:11:59.769 "num_base_bdevs": 4, 00:11:59.769 "num_base_bdevs_discovered": 2, 00:11:59.769 "num_base_bdevs_operational": 4, 00:11:59.769 "base_bdevs_list": [ 00:11:59.769 { 00:11:59.769 "name": "BaseBdev1", 00:11:59.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.769 "is_configured": false, 00:11:59.769 "data_offset": 0, 00:11:59.769 "data_size": 0 00:11:59.769 }, 00:11:59.769 { 00:11:59.769 "name": null, 00:11:59.769 "uuid": "6d40c353-1fed-4a04-b60b-b8bf1712094d", 00:11:59.769 "is_configured": false, 00:11:59.769 "data_offset": 0, 00:11:59.769 "data_size": 63488 
00:11:59.769 }, 00:11:59.769 { 00:11:59.769 "name": "BaseBdev3", 00:11:59.769 "uuid": "6bf20608-5de6-4d1c-ad18-029b62ce896e", 00:11:59.769 "is_configured": true, 00:11:59.769 "data_offset": 2048, 00:11:59.769 "data_size": 63488 00:11:59.769 }, 00:11:59.769 { 00:11:59.769 "name": "BaseBdev4", 00:11:59.769 "uuid": "6f597457-2c75-4fd8-8d94-d878df57d3da", 00:11:59.769 "is_configured": true, 00:11:59.769 "data_offset": 2048, 00:11:59.769 "data_size": 63488 00:11:59.769 } 00:11:59.769 ] 00:11:59.769 }' 00:11:59.769 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.769 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.029 [2024-09-30 12:29:11.871111] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.029 BaseBdev1 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.029 [ 00:12:00.029 { 00:12:00.029 "name": "BaseBdev1", 00:12:00.029 "aliases": [ 00:12:00.029 "0d5a23b5-0422-4862-a03d-af0aa6db33bd" 00:12:00.029 ], 00:12:00.029 "product_name": "Malloc disk", 00:12:00.029 "block_size": 512, 00:12:00.029 "num_blocks": 65536, 00:12:00.029 "uuid": "0d5a23b5-0422-4862-a03d-af0aa6db33bd", 00:12:00.029 "assigned_rate_limits": { 00:12:00.029 "rw_ios_per_sec": 0, 00:12:00.029 "rw_mbytes_per_sec": 0, 
00:12:00.029 "r_mbytes_per_sec": 0, 00:12:00.029 "w_mbytes_per_sec": 0 00:12:00.029 }, 00:12:00.029 "claimed": true, 00:12:00.029 "claim_type": "exclusive_write", 00:12:00.029 "zoned": false, 00:12:00.029 "supported_io_types": { 00:12:00.029 "read": true, 00:12:00.029 "write": true, 00:12:00.029 "unmap": true, 00:12:00.029 "flush": true, 00:12:00.029 "reset": true, 00:12:00.029 "nvme_admin": false, 00:12:00.029 "nvme_io": false, 00:12:00.029 "nvme_io_md": false, 00:12:00.029 "write_zeroes": true, 00:12:00.029 "zcopy": true, 00:12:00.029 "get_zone_info": false, 00:12:00.029 "zone_management": false, 00:12:00.029 "zone_append": false, 00:12:00.029 "compare": false, 00:12:00.029 "compare_and_write": false, 00:12:00.029 "abort": true, 00:12:00.029 "seek_hole": false, 00:12:00.029 "seek_data": false, 00:12:00.029 "copy": true, 00:12:00.029 "nvme_iov_md": false 00:12:00.029 }, 00:12:00.029 "memory_domains": [ 00:12:00.029 { 00:12:00.029 "dma_device_id": "system", 00:12:00.029 "dma_device_type": 1 00:12:00.029 }, 00:12:00.029 { 00:12:00.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.029 "dma_device_type": 2 00:12:00.029 } 00:12:00.029 ], 00:12:00.029 "driver_specific": {} 00:12:00.029 } 00:12:00.029 ] 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.029 12:29:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.029 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.289 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.289 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.289 "name": "Existed_Raid", 00:12:00.289 "uuid": "4655042f-a091-4cda-9fc8-86b01670de7d", 00:12:00.289 "strip_size_kb": 64, 00:12:00.289 "state": "configuring", 00:12:00.289 "raid_level": "concat", 00:12:00.289 "superblock": true, 00:12:00.289 "num_base_bdevs": 4, 00:12:00.289 "num_base_bdevs_discovered": 3, 00:12:00.289 "num_base_bdevs_operational": 4, 00:12:00.289 "base_bdevs_list": [ 00:12:00.289 { 00:12:00.289 "name": "BaseBdev1", 00:12:00.289 "uuid": "0d5a23b5-0422-4862-a03d-af0aa6db33bd", 00:12:00.289 "is_configured": true, 00:12:00.289 "data_offset": 2048, 00:12:00.289 "data_size": 63488 00:12:00.289 }, 00:12:00.289 { 
00:12:00.289 "name": null, 00:12:00.289 "uuid": "6d40c353-1fed-4a04-b60b-b8bf1712094d", 00:12:00.289 "is_configured": false, 00:12:00.289 "data_offset": 0, 00:12:00.289 "data_size": 63488 00:12:00.289 }, 00:12:00.289 { 00:12:00.289 "name": "BaseBdev3", 00:12:00.289 "uuid": "6bf20608-5de6-4d1c-ad18-029b62ce896e", 00:12:00.289 "is_configured": true, 00:12:00.289 "data_offset": 2048, 00:12:00.289 "data_size": 63488 00:12:00.289 }, 00:12:00.289 { 00:12:00.289 "name": "BaseBdev4", 00:12:00.289 "uuid": "6f597457-2c75-4fd8-8d94-d878df57d3da", 00:12:00.289 "is_configured": true, 00:12:00.289 "data_offset": 2048, 00:12:00.289 "data_size": 63488 00:12:00.289 } 00:12:00.289 ] 00:12:00.289 }' 00:12:00.289 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.289 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.549 [2024-09-30 12:29:12.426210] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.549 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.830 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.830 12:29:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.830 "name": "Existed_Raid", 00:12:00.830 "uuid": "4655042f-a091-4cda-9fc8-86b01670de7d", 00:12:00.830 "strip_size_kb": 64, 00:12:00.830 "state": "configuring", 00:12:00.830 "raid_level": "concat", 00:12:00.830 "superblock": true, 00:12:00.830 "num_base_bdevs": 4, 00:12:00.830 "num_base_bdevs_discovered": 2, 00:12:00.830 "num_base_bdevs_operational": 4, 00:12:00.830 "base_bdevs_list": [ 00:12:00.830 { 00:12:00.830 "name": "BaseBdev1", 00:12:00.830 "uuid": "0d5a23b5-0422-4862-a03d-af0aa6db33bd", 00:12:00.830 "is_configured": true, 00:12:00.830 "data_offset": 2048, 00:12:00.830 "data_size": 63488 00:12:00.830 }, 00:12:00.830 { 00:12:00.830 "name": null, 00:12:00.830 "uuid": "6d40c353-1fed-4a04-b60b-b8bf1712094d", 00:12:00.830 "is_configured": false, 00:12:00.830 "data_offset": 0, 00:12:00.830 "data_size": 63488 00:12:00.830 }, 00:12:00.830 { 00:12:00.830 "name": null, 00:12:00.830 "uuid": "6bf20608-5de6-4d1c-ad18-029b62ce896e", 00:12:00.830 "is_configured": false, 00:12:00.830 "data_offset": 0, 00:12:00.830 "data_size": 63488 00:12:00.830 }, 00:12:00.830 { 00:12:00.830 "name": "BaseBdev4", 00:12:00.830 "uuid": "6f597457-2c75-4fd8-8d94-d878df57d3da", 00:12:00.830 "is_configured": true, 00:12:00.830 "data_offset": 2048, 00:12:00.830 "data_size": 63488 00:12:00.830 } 00:12:00.830 ] 00:12:00.830 }' 00:12:00.830 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.830 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.100 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.100 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.100 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.100 12:29:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:01.100 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.101 [2024-09-30 12:29:12.937358] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.101 "name": "Existed_Raid", 00:12:01.101 "uuid": "4655042f-a091-4cda-9fc8-86b01670de7d", 00:12:01.101 "strip_size_kb": 64, 00:12:01.101 "state": "configuring", 00:12:01.101 "raid_level": "concat", 00:12:01.101 "superblock": true, 00:12:01.101 "num_base_bdevs": 4, 00:12:01.101 "num_base_bdevs_discovered": 3, 00:12:01.101 "num_base_bdevs_operational": 4, 00:12:01.101 "base_bdevs_list": [ 00:12:01.101 { 00:12:01.101 "name": "BaseBdev1", 00:12:01.101 "uuid": "0d5a23b5-0422-4862-a03d-af0aa6db33bd", 00:12:01.101 "is_configured": true, 00:12:01.101 "data_offset": 2048, 00:12:01.101 "data_size": 63488 00:12:01.101 }, 00:12:01.101 { 00:12:01.101 "name": null, 00:12:01.101 "uuid": "6d40c353-1fed-4a04-b60b-b8bf1712094d", 00:12:01.101 "is_configured": false, 00:12:01.101 "data_offset": 0, 00:12:01.101 "data_size": 63488 00:12:01.101 }, 00:12:01.101 { 00:12:01.101 "name": "BaseBdev3", 00:12:01.101 "uuid": "6bf20608-5de6-4d1c-ad18-029b62ce896e", 00:12:01.101 "is_configured": true, 00:12:01.101 "data_offset": 2048, 00:12:01.101 "data_size": 63488 00:12:01.101 }, 00:12:01.101 { 00:12:01.101 "name": "BaseBdev4", 00:12:01.101 "uuid": 
"6f597457-2c75-4fd8-8d94-d878df57d3da", 00:12:01.101 "is_configured": true, 00:12:01.101 "data_offset": 2048, 00:12:01.101 "data_size": 63488 00:12:01.101 } 00:12:01.101 ] 00:12:01.101 }' 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.101 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.670 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:01.670 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.670 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.670 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.670 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.670 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:01.670 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:01.670 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.670 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.670 [2024-09-30 12:29:13.436516] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:01.670 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.670 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:01.670 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.670 12:29:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.670 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.670 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.670 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.670 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.671 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.671 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.671 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.671 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.671 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.671 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.671 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.671 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.930 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.930 "name": "Existed_Raid", 00:12:01.930 "uuid": "4655042f-a091-4cda-9fc8-86b01670de7d", 00:12:01.930 "strip_size_kb": 64, 00:12:01.930 "state": "configuring", 00:12:01.930 "raid_level": "concat", 00:12:01.930 "superblock": true, 00:12:01.930 "num_base_bdevs": 4, 00:12:01.930 "num_base_bdevs_discovered": 2, 00:12:01.930 "num_base_bdevs_operational": 4, 00:12:01.930 "base_bdevs_list": [ 00:12:01.930 { 00:12:01.930 "name": null, 00:12:01.930 
"uuid": "0d5a23b5-0422-4862-a03d-af0aa6db33bd", 00:12:01.930 "is_configured": false, 00:12:01.930 "data_offset": 0, 00:12:01.931 "data_size": 63488 00:12:01.931 }, 00:12:01.931 { 00:12:01.931 "name": null, 00:12:01.931 "uuid": "6d40c353-1fed-4a04-b60b-b8bf1712094d", 00:12:01.931 "is_configured": false, 00:12:01.931 "data_offset": 0, 00:12:01.931 "data_size": 63488 00:12:01.931 }, 00:12:01.931 { 00:12:01.931 "name": "BaseBdev3", 00:12:01.931 "uuid": "6bf20608-5de6-4d1c-ad18-029b62ce896e", 00:12:01.931 "is_configured": true, 00:12:01.931 "data_offset": 2048, 00:12:01.931 "data_size": 63488 00:12:01.931 }, 00:12:01.931 { 00:12:01.931 "name": "BaseBdev4", 00:12:01.931 "uuid": "6f597457-2c75-4fd8-8d94-d878df57d3da", 00:12:01.931 "is_configured": true, 00:12:01.931 "data_offset": 2048, 00:12:01.931 "data_size": 63488 00:12:01.931 } 00:12:01.931 ] 00:12:01.931 }' 00:12:01.931 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.931 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.190 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.190 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.190 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.190 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:02.190 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.190 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:02.190 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:02.190 12:29:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.190 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.190 [2024-09-30 12:29:14.058645] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:02.190 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.190 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:02.191 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.191 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.191 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.191 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.191 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.191 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.191 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.191 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.191 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.191 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.191 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.191 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.191 12:29:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.191 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.450 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.450 "name": "Existed_Raid", 00:12:02.450 "uuid": "4655042f-a091-4cda-9fc8-86b01670de7d", 00:12:02.450 "strip_size_kb": 64, 00:12:02.450 "state": "configuring", 00:12:02.450 "raid_level": "concat", 00:12:02.450 "superblock": true, 00:12:02.450 "num_base_bdevs": 4, 00:12:02.450 "num_base_bdevs_discovered": 3, 00:12:02.450 "num_base_bdevs_operational": 4, 00:12:02.450 "base_bdevs_list": [ 00:12:02.450 { 00:12:02.450 "name": null, 00:12:02.450 "uuid": "0d5a23b5-0422-4862-a03d-af0aa6db33bd", 00:12:02.450 "is_configured": false, 00:12:02.450 "data_offset": 0, 00:12:02.450 "data_size": 63488 00:12:02.450 }, 00:12:02.450 { 00:12:02.450 "name": "BaseBdev2", 00:12:02.450 "uuid": "6d40c353-1fed-4a04-b60b-b8bf1712094d", 00:12:02.450 "is_configured": true, 00:12:02.450 "data_offset": 2048, 00:12:02.450 "data_size": 63488 00:12:02.450 }, 00:12:02.450 { 00:12:02.450 "name": "BaseBdev3", 00:12:02.451 "uuid": "6bf20608-5de6-4d1c-ad18-029b62ce896e", 00:12:02.451 "is_configured": true, 00:12:02.451 "data_offset": 2048, 00:12:02.451 "data_size": 63488 00:12:02.451 }, 00:12:02.451 { 00:12:02.451 "name": "BaseBdev4", 00:12:02.451 "uuid": "6f597457-2c75-4fd8-8d94-d878df57d3da", 00:12:02.451 "is_configured": true, 00:12:02.451 "data_offset": 2048, 00:12:02.451 "data_size": 63488 00:12:02.451 } 00:12:02.451 ] 00:12:02.451 }' 00:12:02.451 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.451 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.710 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.710 12:29:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.710 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.710 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:02.710 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.710 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:02.710 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:02.710 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.710 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.710 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.710 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.711 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0d5a23b5-0422-4862-a03d-af0aa6db33bd 00:12:02.711 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.711 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.970 [2024-09-30 12:29:14.639754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:02.970 [2024-09-30 12:29:14.640096] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:02.970 [2024-09-30 12:29:14.640144] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:02.970 [2024-09-30 12:29:14.640453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:02.970 [2024-09-30 12:29:14.640635] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:02.970 [2024-09-30 12:29:14.640676] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:02.970 NewBaseBdev 00:12:02.970 [2024-09-30 12:29:14.640891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.970 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.970 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.971 12:29:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.971 [ 00:12:02.971 { 00:12:02.971 "name": "NewBaseBdev", 00:12:02.971 "aliases": [ 00:12:02.971 "0d5a23b5-0422-4862-a03d-af0aa6db33bd" 00:12:02.971 ], 00:12:02.971 "product_name": "Malloc disk", 00:12:02.971 "block_size": 512, 00:12:02.971 "num_blocks": 65536, 00:12:02.971 "uuid": "0d5a23b5-0422-4862-a03d-af0aa6db33bd", 00:12:02.971 "assigned_rate_limits": { 00:12:02.971 "rw_ios_per_sec": 0, 00:12:02.971 "rw_mbytes_per_sec": 0, 00:12:02.971 "r_mbytes_per_sec": 0, 00:12:02.971 "w_mbytes_per_sec": 0 00:12:02.971 }, 00:12:02.971 "claimed": true, 00:12:02.971 "claim_type": "exclusive_write", 00:12:02.971 "zoned": false, 00:12:02.971 "supported_io_types": { 00:12:02.971 "read": true, 00:12:02.971 "write": true, 00:12:02.971 "unmap": true, 00:12:02.971 "flush": true, 00:12:02.971 "reset": true, 00:12:02.971 "nvme_admin": false, 00:12:02.971 "nvme_io": false, 00:12:02.971 "nvme_io_md": false, 00:12:02.971 "write_zeroes": true, 00:12:02.971 "zcopy": true, 00:12:02.971 "get_zone_info": false, 00:12:02.971 "zone_management": false, 00:12:02.971 "zone_append": false, 00:12:02.971 "compare": false, 00:12:02.971 "compare_and_write": false, 00:12:02.971 "abort": true, 00:12:02.971 "seek_hole": false, 00:12:02.971 "seek_data": false, 00:12:02.971 "copy": true, 00:12:02.971 "nvme_iov_md": false 00:12:02.971 }, 00:12:02.971 "memory_domains": [ 00:12:02.971 { 00:12:02.971 "dma_device_id": "system", 00:12:02.971 "dma_device_type": 1 00:12:02.971 }, 00:12:02.971 { 00:12:02.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.971 "dma_device_type": 2 00:12:02.971 } 00:12:02.971 ], 00:12:02.971 "driver_specific": {} 00:12:02.971 } 00:12:02.971 ] 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:02.971 12:29:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.971 "name": "Existed_Raid", 00:12:02.971 "uuid": "4655042f-a091-4cda-9fc8-86b01670de7d", 00:12:02.971 "strip_size_kb": 64, 00:12:02.971 
"state": "online", 00:12:02.971 "raid_level": "concat", 00:12:02.971 "superblock": true, 00:12:02.971 "num_base_bdevs": 4, 00:12:02.971 "num_base_bdevs_discovered": 4, 00:12:02.971 "num_base_bdevs_operational": 4, 00:12:02.971 "base_bdevs_list": [ 00:12:02.971 { 00:12:02.971 "name": "NewBaseBdev", 00:12:02.971 "uuid": "0d5a23b5-0422-4862-a03d-af0aa6db33bd", 00:12:02.971 "is_configured": true, 00:12:02.971 "data_offset": 2048, 00:12:02.971 "data_size": 63488 00:12:02.971 }, 00:12:02.971 { 00:12:02.971 "name": "BaseBdev2", 00:12:02.971 "uuid": "6d40c353-1fed-4a04-b60b-b8bf1712094d", 00:12:02.971 "is_configured": true, 00:12:02.971 "data_offset": 2048, 00:12:02.971 "data_size": 63488 00:12:02.971 }, 00:12:02.971 { 00:12:02.971 "name": "BaseBdev3", 00:12:02.971 "uuid": "6bf20608-5de6-4d1c-ad18-029b62ce896e", 00:12:02.971 "is_configured": true, 00:12:02.971 "data_offset": 2048, 00:12:02.971 "data_size": 63488 00:12:02.971 }, 00:12:02.971 { 00:12:02.971 "name": "BaseBdev4", 00:12:02.971 "uuid": "6f597457-2c75-4fd8-8d94-d878df57d3da", 00:12:02.971 "is_configured": true, 00:12:02.971 "data_offset": 2048, 00:12:02.971 "data_size": 63488 00:12:02.971 } 00:12:02.971 ] 00:12:02.971 }' 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.971 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.231 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:03.231 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:03.231 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:03.231 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:03.231 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:03.231 
12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:03.231 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:03.231 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:03.231 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.231 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.231 [2024-09-30 12:29:15.099368] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.231 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:03.491 "name": "Existed_Raid", 00:12:03.491 "aliases": [ 00:12:03.491 "4655042f-a091-4cda-9fc8-86b01670de7d" 00:12:03.491 ], 00:12:03.491 "product_name": "Raid Volume", 00:12:03.491 "block_size": 512, 00:12:03.491 "num_blocks": 253952, 00:12:03.491 "uuid": "4655042f-a091-4cda-9fc8-86b01670de7d", 00:12:03.491 "assigned_rate_limits": { 00:12:03.491 "rw_ios_per_sec": 0, 00:12:03.491 "rw_mbytes_per_sec": 0, 00:12:03.491 "r_mbytes_per_sec": 0, 00:12:03.491 "w_mbytes_per_sec": 0 00:12:03.491 }, 00:12:03.491 "claimed": false, 00:12:03.491 "zoned": false, 00:12:03.491 "supported_io_types": { 00:12:03.491 "read": true, 00:12:03.491 "write": true, 00:12:03.491 "unmap": true, 00:12:03.491 "flush": true, 00:12:03.491 "reset": true, 00:12:03.491 "nvme_admin": false, 00:12:03.491 "nvme_io": false, 00:12:03.491 "nvme_io_md": false, 00:12:03.491 "write_zeroes": true, 00:12:03.491 "zcopy": false, 00:12:03.491 "get_zone_info": false, 00:12:03.491 "zone_management": false, 00:12:03.491 "zone_append": false, 00:12:03.491 "compare": false, 00:12:03.491 "compare_and_write": false, 00:12:03.491 "abort": 
false, 00:12:03.491 "seek_hole": false, 00:12:03.491 "seek_data": false, 00:12:03.491 "copy": false, 00:12:03.491 "nvme_iov_md": false 00:12:03.491 }, 00:12:03.491 "memory_domains": [ 00:12:03.491 { 00:12:03.491 "dma_device_id": "system", 00:12:03.491 "dma_device_type": 1 00:12:03.491 }, 00:12:03.491 { 00:12:03.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.491 "dma_device_type": 2 00:12:03.491 }, 00:12:03.491 { 00:12:03.491 "dma_device_id": "system", 00:12:03.491 "dma_device_type": 1 00:12:03.491 }, 00:12:03.491 { 00:12:03.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.491 "dma_device_type": 2 00:12:03.491 }, 00:12:03.491 { 00:12:03.491 "dma_device_id": "system", 00:12:03.491 "dma_device_type": 1 00:12:03.491 }, 00:12:03.491 { 00:12:03.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.491 "dma_device_type": 2 00:12:03.491 }, 00:12:03.491 { 00:12:03.491 "dma_device_id": "system", 00:12:03.491 "dma_device_type": 1 00:12:03.491 }, 00:12:03.491 { 00:12:03.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.491 "dma_device_type": 2 00:12:03.491 } 00:12:03.491 ], 00:12:03.491 "driver_specific": { 00:12:03.491 "raid": { 00:12:03.491 "uuid": "4655042f-a091-4cda-9fc8-86b01670de7d", 00:12:03.491 "strip_size_kb": 64, 00:12:03.491 "state": "online", 00:12:03.491 "raid_level": "concat", 00:12:03.491 "superblock": true, 00:12:03.491 "num_base_bdevs": 4, 00:12:03.491 "num_base_bdevs_discovered": 4, 00:12:03.491 "num_base_bdevs_operational": 4, 00:12:03.491 "base_bdevs_list": [ 00:12:03.491 { 00:12:03.491 "name": "NewBaseBdev", 00:12:03.491 "uuid": "0d5a23b5-0422-4862-a03d-af0aa6db33bd", 00:12:03.491 "is_configured": true, 00:12:03.491 "data_offset": 2048, 00:12:03.491 "data_size": 63488 00:12:03.491 }, 00:12:03.491 { 00:12:03.491 "name": "BaseBdev2", 00:12:03.491 "uuid": "6d40c353-1fed-4a04-b60b-b8bf1712094d", 00:12:03.491 "is_configured": true, 00:12:03.491 "data_offset": 2048, 00:12:03.491 "data_size": 63488 00:12:03.491 }, 00:12:03.491 { 00:12:03.491 
"name": "BaseBdev3", 00:12:03.491 "uuid": "6bf20608-5de6-4d1c-ad18-029b62ce896e", 00:12:03.491 "is_configured": true, 00:12:03.491 "data_offset": 2048, 00:12:03.491 "data_size": 63488 00:12:03.491 }, 00:12:03.491 { 00:12:03.491 "name": "BaseBdev4", 00:12:03.491 "uuid": "6f597457-2c75-4fd8-8d94-d878df57d3da", 00:12:03.491 "is_configured": true, 00:12:03.491 "data_offset": 2048, 00:12:03.491 "data_size": 63488 00:12:03.491 } 00:12:03.491 ] 00:12:03.491 } 00:12:03.491 } 00:12:03.491 }' 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:03.491 BaseBdev2 00:12:03.491 BaseBdev3 00:12:03.491 BaseBdev4' 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.491 12:29:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.491 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.492 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.492 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.492 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]]
00:12:03.492 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:03.492 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:12:03.492 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:03.492 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:03.492 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:03.492 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:03.752 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:03.752 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:03.752 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:03.752 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:03.752 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:03.752 [2024-09-30 12:29:15.406523] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:03.752 [2024-09-30 12:29:15.406553] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:03.752 [2024-09-30 12:29:15.406623] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:03.752 [2024-09-30 12:29:15.406691] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:03.752 [2024-09-30 12:29:15.406701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:12:03.752 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:03.752 12:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71822
00:12:03.752 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 71822 ']'
00:12:03.752 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 71822
00:12:03.752 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:12:03.752 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:03.752 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71822
00:12:03.752 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:03.752 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:03.752 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71822' killing process with pid 71822
00:12:03.752 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 71822 [2024-09-30 12:29:15.451694] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:03.752 12:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 71822
00:12:04.012 [2024-09-30 12:29:15.868055] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:05.394 12:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:12:05.394
00:12:05.394 real 0m11.823s
00:12:05.394 user 0m18.393s
00:12:05.394 sys 0m2.236s
00:12:05.394 12:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:05.394 12:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.394 ************************************
00:12:05.394 END TEST raid_state_function_test_sb
00:12:05.394 ************************************
00:12:05.394 12:29:17 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4
00:12:05.394 12:29:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:12:05.394 12:29:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:05.394 12:29:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:05.394 ************************************
00:12:05.394 START TEST raid_superblock_test
00:12:05.394 ************************************
00:12:05.394 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']'
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72500
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72500 ']'
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:05.395 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:05.654 [2024-09-30 12:29:17.358381] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:12:05.654 [2024-09-30 12:29:17.358615] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72500 ]
00:12:05.654 [2024-09-30 12:29:17.526499] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:05.915 [2024-09-30 12:29:17.760501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:12:06.174 [2024-09-30 12:29:17.992001] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:06.174 [2024-09-30 12:29:17.992125] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:06.434 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:06.434 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:12:06.434 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:12:06.434 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:06.434 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:12:06.434 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:12:06.434 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:12:06.434 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:06.434 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:06.434 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:06.434 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:12:06.434 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.435 malloc1
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.435 [2024-09-30 12:29:18.229179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:06.435 [2024-09-30 12:29:18.229292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:06.435 [2024-09-30 12:29:18.229334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:12:06.435 [2024-09-30 12:29:18.229370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:06.435 [2024-09-30 12:29:18.231743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:06.435 [2024-09-30 12:29:18.231824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:06.435 pt1
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.435 malloc2
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.435 [2024-09-30 12:29:18.315229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:06.435 [2024-09-30 12:29:18.315340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:06.435 [2024-09-30 12:29:18.315403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:12:06.435 [2024-09-30 12:29:18.315437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:06.435 [2024-09-30 12:29:18.317905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:06.435 [2024-09-30 12:29:18.317975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:06.435 pt2
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.435 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.695 malloc3
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.695 [2024-09-30 12:29:18.381125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:06.695 [2024-09-30 12:29:18.381217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:06.695 [2024-09-30 12:29:18.381255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:12:06.695 [2024-09-30 12:29:18.381283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:06.695 [2024-09-30 12:29:18.383708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:06.695 [2024-09-30 12:29:18.383794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:06.695 pt3
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.695 malloc4
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.695 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.695 [2024-09-30 12:29:18.442110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:12:06.695 [2024-09-30 12:29:18.442202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:06.695 [2024-09-30 12:29:18.442239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:12:06.695 [2024-09-30 12:29:18.442266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:06.695 [2024-09-30 12:29:18.444590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:06.695 [2024-09-30 12:29:18.444657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:12:06.695 pt4
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.696 [2024-09-30 12:29:18.454154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:06.696 [2024-09-30 12:29:18.456183] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:06.696 [2024-09-30 12:29:18.456285] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:06.696 [2024-09-30 12:29:18.456366] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:12:06.696 [2024-09-30 12:29:18.456596] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:12:06.696 [2024-09-30 12:29:18.456647] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:12:06.696 [2024-09-30 12:29:18.456930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:12:06.696 [2024-09-30 12:29:18.457121] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:12:06.696 [2024-09-30 12:29:18.457168] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:12:06.696 [2024-09-30 12:29:18.457350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:06.696 "name": "raid_bdev1",
00:12:06.696 "uuid": "862b4304-e8c1-426f-84c3-0aade1e27804",
00:12:06.696 "strip_size_kb": 64,
00:12:06.696 "state": "online",
00:12:06.696 "raid_level": "concat",
00:12:06.696 "superblock": true,
00:12:06.696 "num_base_bdevs": 4,
00:12:06.696 "num_base_bdevs_discovered": 4,
00:12:06.696 "num_base_bdevs_operational": 4,
00:12:06.696 "base_bdevs_list": [
00:12:06.696 {
00:12:06.696 "name": "pt1",
00:12:06.696 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:06.696 "is_configured": true,
00:12:06.696 "data_offset": 2048,
00:12:06.696 "data_size": 63488
00:12:06.696 },
00:12:06.696 {
00:12:06.696 "name": "pt2",
00:12:06.696 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:06.696 "is_configured": true,
00:12:06.696 "data_offset": 2048,
00:12:06.696 "data_size": 63488
00:12:06.696 },
00:12:06.696 {
00:12:06.696 "name": "pt3",
00:12:06.696 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:06.696 "is_configured": true,
00:12:06.696 "data_offset": 2048,
00:12:06.696 "data_size": 63488
00:12:06.696 },
00:12:06.696 {
00:12:06.696 "name": "pt4",
00:12:06.696 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:06.696 "is_configured": true,
00:12:06.696 "data_offset": 2048,
00:12:06.696 "data_size": 63488
00:12:06.696 }
00:12:06.696 ]
00:12:06.696 }'
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:06.696 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.266 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:12:07.266 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:12:07.266 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:07.266 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:07.266 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:07.266 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:07.266 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:07.266 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:07.266 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.266 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.266 [2024-09-30 12:29:18.901656] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:07.266 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.266 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:07.266 "name": "raid_bdev1",
00:12:07.266 "aliases": [
00:12:07.266 "862b4304-e8c1-426f-84c3-0aade1e27804"
00:12:07.266 ],
00:12:07.266 "product_name": "Raid Volume",
00:12:07.266 "block_size": 512,
00:12:07.266 "num_blocks": 253952,
00:12:07.266 "uuid": "862b4304-e8c1-426f-84c3-0aade1e27804",
00:12:07.266 "assigned_rate_limits": {
00:12:07.266 "rw_ios_per_sec": 0,
00:12:07.266 "rw_mbytes_per_sec": 0,
00:12:07.266 "r_mbytes_per_sec": 0,
00:12:07.266 "w_mbytes_per_sec": 0
00:12:07.266 },
00:12:07.266 "claimed": false,
00:12:07.266 "zoned": false,
00:12:07.266 "supported_io_types": {
00:12:07.266 "read": true,
00:12:07.266 "write": true,
00:12:07.266 "unmap": true,
00:12:07.266 "flush": true,
00:12:07.266 "reset": true,
00:12:07.266 "nvme_admin": false,
00:12:07.266 "nvme_io": false,
00:12:07.266 "nvme_io_md": false,
00:12:07.266 "write_zeroes": true,
00:12:07.266 "zcopy": false,
00:12:07.266 "get_zone_info": false,
00:12:07.266 "zone_management": false,
00:12:07.266 "zone_append": false,
00:12:07.266 "compare": false,
00:12:07.266 "compare_and_write": false,
00:12:07.266 "abort": false,
00:12:07.266 "seek_hole": false,
00:12:07.266 "seek_data": false,
00:12:07.266 "copy": false,
00:12:07.266 "nvme_iov_md": false
00:12:07.266 },
00:12:07.266 "memory_domains": [
00:12:07.266 {
00:12:07.266 "dma_device_id": "system",
00:12:07.266 "dma_device_type": 1
00:12:07.266 },
00:12:07.266 {
00:12:07.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:07.266 "dma_device_type": 2
00:12:07.266 },
00:12:07.266 {
00:12:07.266 "dma_device_id": "system",
00:12:07.266 "dma_device_type": 1
00:12:07.266 },
00:12:07.266 {
00:12:07.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:07.266 "dma_device_type": 2
00:12:07.266 },
00:12:07.266 {
00:12:07.266 "dma_device_id": "system",
00:12:07.266 "dma_device_type": 1
00:12:07.266 },
00:12:07.266 {
00:12:07.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:07.266 "dma_device_type": 2
00:12:07.266 },
00:12:07.266 {
00:12:07.266 "dma_device_id": "system",
00:12:07.266 "dma_device_type": 1
00:12:07.266 },
00:12:07.266 {
00:12:07.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:07.266 "dma_device_type": 2
00:12:07.266 }
00:12:07.266 ],
00:12:07.266 "driver_specific": {
00:12:07.266 "raid": {
00:12:07.266 "uuid": "862b4304-e8c1-426f-84c3-0aade1e27804",
00:12:07.266 "strip_size_kb": 64,
00:12:07.266 "state": "online",
00:12:07.266 "raid_level": "concat",
00:12:07.266 "superblock": true,
00:12:07.266 "num_base_bdevs": 4,
00:12:07.266 "num_base_bdevs_discovered": 4,
00:12:07.266 "num_base_bdevs_operational": 4,
00:12:07.266 "base_bdevs_list": [
00:12:07.266 {
00:12:07.266 "name": "pt1",
00:12:07.266 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:07.266 "is_configured": true,
00:12:07.266 "data_offset": 2048,
00:12:07.266 "data_size": 63488
00:12:07.266 },
00:12:07.266 {
00:12:07.266 "name": "pt2",
00:12:07.266 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:07.266 "is_configured": true,
00:12:07.266 "data_offset": 2048,
00:12:07.266 "data_size": 63488
00:12:07.266 },
00:12:07.266 {
00:12:07.266 "name": "pt3",
00:12:07.266 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:07.266 "is_configured": true,
00:12:07.266 "data_offset": 2048,
00:12:07.266 "data_size": 63488
00:12:07.266 },
00:12:07.266 {
00:12:07.266 "name": "pt4",
00:12:07.266 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:07.266 "is_configured": true,
00:12:07.266 "data_offset": 2048,
00:12:07.266 "data_size": 63488
00:12:07.266 }
00:12:07.266 ]
00:12:07.266 }
00:12:07.266 }
00:12:07.266 }'
00:12:07.266 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:07.266 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:12:07.266 pt2
00:12:07.266 pt3
00:12:07.266 pt4'
00:12:07.266 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.267 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.526 [2024-09-30 12:29:19.221038] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=862b4304-e8c1-426f-84c3-0aade1e27804
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 862b4304-e8c1-426f-84c3-0aade1e27804 ']'
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.526 [2024-09-30 12:29:19.268662] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:07.526 [2024-09-30 12:29:19.268733] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:07.526 [2024-09-30 12:29:19.268825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:07.526 [2024-09-30 12:29:19.268902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:07.526 [2024-09-30 12:29:19.268941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:12:07.526 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:07.527 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:12:07.527 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:07.527 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:07.527 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.527 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.787 [2024-09-30 12:29:19.424418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:12:07.787 [2024-09-30 12:29:19.426635] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:12:07.787 [2024-09-30 12:29:19.426735] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:12:07.787 [2024-09-30 12:29:19.426813] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:12:07.787 [2024-09-30 12:29:19.426891] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:12:07.787 [2024-09-30 12:29:19.426973] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:12:07.787 [2024-09-30 12:29:19.427027] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:12:07.787 [2024-09-30 12:29:19.427078] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:12:07.787 [2024-09-30 12:29:19.427131] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:07.787 [2024-09-30 12:29:19.427165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:12:07.787 request: 00:12:07.787 { 00:12:07.787 "name": "raid_bdev1", 00:12:07.787 "raid_level": "concat", 00:12:07.787 "base_bdevs": [ 00:12:07.787 "malloc1", 00:12:07.787 "malloc2", 00:12:07.787 "malloc3", 00:12:07.787 "malloc4" 00:12:07.787 ], 00:12:07.787 "strip_size_kb": 64, 00:12:07.787 "superblock": false, 00:12:07.787 "method": "bdev_raid_create", 00:12:07.787 "req_id": 1 00:12:07.787 } 00:12:07.787 Got JSON-RPC error response 00:12:07.787 response: 00:12:07.787 { 00:12:07.787 "code": -17, 00:12:07.787 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:07.787 } 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.787 [2024-09-30 12:29:19.492274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:07.787 [2024-09-30 12:29:19.492356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.787 [2024-09-30 12:29:19.492387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:07.787 [2024-09-30 12:29:19.492400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.787 [2024-09-30 12:29:19.494781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.787 [2024-09-30 12:29:19.494816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:07.787 [2024-09-30 12:29:19.494882] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:07.787 [2024-09-30 12:29:19.494937] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:07.787 pt1 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.787 "name": "raid_bdev1", 00:12:07.787 "uuid": "862b4304-e8c1-426f-84c3-0aade1e27804", 00:12:07.787 "strip_size_kb": 64, 00:12:07.787 "state": "configuring", 00:12:07.787 "raid_level": "concat", 00:12:07.787 "superblock": true, 00:12:07.787 "num_base_bdevs": 4, 00:12:07.787 "num_base_bdevs_discovered": 1, 00:12:07.787 "num_base_bdevs_operational": 4, 00:12:07.787 "base_bdevs_list": [ 00:12:07.787 { 00:12:07.787 "name": "pt1", 00:12:07.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:07.787 "is_configured": true, 00:12:07.787 "data_offset": 2048, 00:12:07.787 "data_size": 63488 00:12:07.787 }, 00:12:07.787 { 00:12:07.787 "name": null, 00:12:07.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.787 "is_configured": false, 00:12:07.787 "data_offset": 2048, 00:12:07.787 "data_size": 63488 00:12:07.787 }, 00:12:07.787 { 00:12:07.787 "name": null, 00:12:07.787 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.787 "is_configured": false, 00:12:07.787 "data_offset": 2048, 00:12:07.787 "data_size": 63488 00:12:07.787 }, 00:12:07.787 { 00:12:07.787 "name": null, 00:12:07.787 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:07.787 "is_configured": false, 00:12:07.787 "data_offset": 2048, 00:12:07.787 "data_size": 63488 00:12:07.787 } 00:12:07.787 ] 00:12:07.787 }' 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.787 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.047 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:08.047 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:08.047 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.047 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.047 [2024-09-30 12:29:19.919533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:08.047 [2024-09-30 12:29:19.919621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.047 [2024-09-30 12:29:19.919653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:08.047 [2024-09-30 12:29:19.919682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.047 [2024-09-30 12:29:19.920123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.047 [2024-09-30 12:29:19.920188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:08.047 [2024-09-30 12:29:19.920281] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:08.047 [2024-09-30 12:29:19.920332] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:08.047 pt2 00:12:08.047 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.047 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:08.047 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.047 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.047 [2024-09-30 12:29:19.931534] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:08.047 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.047 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:08.047 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.047 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.047 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.047 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.047 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.047 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.047 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.047 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.047 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.307 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.307 12:29:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.307 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.307 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.307 12:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.307 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.307 "name": "raid_bdev1", 00:12:08.307 "uuid": "862b4304-e8c1-426f-84c3-0aade1e27804", 00:12:08.307 "strip_size_kb": 64, 00:12:08.307 "state": "configuring", 00:12:08.307 "raid_level": "concat", 00:12:08.307 "superblock": true, 00:12:08.307 "num_base_bdevs": 4, 00:12:08.307 "num_base_bdevs_discovered": 1, 00:12:08.307 "num_base_bdevs_operational": 4, 00:12:08.307 "base_bdevs_list": [ 00:12:08.307 { 00:12:08.307 "name": "pt1", 00:12:08.307 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:08.307 "is_configured": true, 00:12:08.307 "data_offset": 2048, 00:12:08.307 "data_size": 63488 00:12:08.307 }, 00:12:08.307 { 00:12:08.307 "name": null, 00:12:08.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.307 "is_configured": false, 00:12:08.307 "data_offset": 0, 00:12:08.307 "data_size": 63488 00:12:08.307 }, 00:12:08.307 { 00:12:08.307 "name": null, 00:12:08.307 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.307 "is_configured": false, 00:12:08.307 "data_offset": 2048, 00:12:08.307 "data_size": 63488 00:12:08.307 }, 00:12:08.307 { 00:12:08.307 "name": null, 00:12:08.307 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:08.307 "is_configured": false, 00:12:08.307 "data_offset": 2048, 00:12:08.307 "data_size": 63488 00:12:08.307 } 00:12:08.307 ] 00:12:08.307 }' 00:12:08.307 12:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.307 12:29:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.568 [2024-09-30 12:29:20.363046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:08.568 [2024-09-30 12:29:20.363100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.568 [2024-09-30 12:29:20.363119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:08.568 [2024-09-30 12:29:20.363127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.568 [2024-09-30 12:29:20.363589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.568 [2024-09-30 12:29:20.363622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:08.568 [2024-09-30 12:29:20.363699] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:08.568 [2024-09-30 12:29:20.363727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:08.568 pt2 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.568 [2024-09-30 12:29:20.375028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:08.568 [2024-09-30 12:29:20.375111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.568 [2024-09-30 12:29:20.375152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:08.568 [2024-09-30 12:29:20.375210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.568 [2024-09-30 12:29:20.375623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.568 [2024-09-30 12:29:20.375683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:08.568 [2024-09-30 12:29:20.375779] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:08.568 [2024-09-30 12:29:20.375827] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:08.568 pt3 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.568 [2024-09-30 12:29:20.386983] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:08.568 [2024-09-30 12:29:20.387061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.568 [2024-09-30 12:29:20.387095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:08.568 [2024-09-30 12:29:20.387126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.568 [2024-09-30 12:29:20.387538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.568 [2024-09-30 12:29:20.387598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:08.568 [2024-09-30 12:29:20.387687] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:08.568 [2024-09-30 12:29:20.387751] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:08.568 [2024-09-30 12:29:20.387930] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:08.568 [2024-09-30 12:29:20.387967] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:08.568 [2024-09-30 12:29:20.388242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:08.568 [2024-09-30 12:29:20.388431] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:08.568 [2024-09-30 12:29:20.388478] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:08.568 [2024-09-30 12:29:20.388617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.568 pt4 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.568 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.568 "name": "raid_bdev1", 00:12:08.568 "uuid": "862b4304-e8c1-426f-84c3-0aade1e27804", 00:12:08.568 "strip_size_kb": 64, 00:12:08.568 "state": "online", 00:12:08.568 "raid_level": "concat", 00:12:08.569 
"superblock": true, 00:12:08.569 "num_base_bdevs": 4, 00:12:08.569 "num_base_bdevs_discovered": 4, 00:12:08.569 "num_base_bdevs_operational": 4, 00:12:08.569 "base_bdevs_list": [ 00:12:08.569 { 00:12:08.569 "name": "pt1", 00:12:08.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:08.569 "is_configured": true, 00:12:08.569 "data_offset": 2048, 00:12:08.569 "data_size": 63488 00:12:08.569 }, 00:12:08.569 { 00:12:08.569 "name": "pt2", 00:12:08.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.569 "is_configured": true, 00:12:08.569 "data_offset": 2048, 00:12:08.569 "data_size": 63488 00:12:08.569 }, 00:12:08.569 { 00:12:08.569 "name": "pt3", 00:12:08.569 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.569 "is_configured": true, 00:12:08.569 "data_offset": 2048, 00:12:08.569 "data_size": 63488 00:12:08.569 }, 00:12:08.569 { 00:12:08.569 "name": "pt4", 00:12:08.569 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:08.569 "is_configured": true, 00:12:08.569 "data_offset": 2048, 00:12:08.569 "data_size": 63488 00:12:08.569 } 00:12:08.569 ] 00:12:08.569 }' 00:12:08.569 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.569 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:09.138 12:29:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.138 [2024-09-30 12:29:20.838550] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:09.138 "name": "raid_bdev1", 00:12:09.138 "aliases": [ 00:12:09.138 "862b4304-e8c1-426f-84c3-0aade1e27804" 00:12:09.138 ], 00:12:09.138 "product_name": "Raid Volume", 00:12:09.138 "block_size": 512, 00:12:09.138 "num_blocks": 253952, 00:12:09.138 "uuid": "862b4304-e8c1-426f-84c3-0aade1e27804", 00:12:09.138 "assigned_rate_limits": { 00:12:09.138 "rw_ios_per_sec": 0, 00:12:09.138 "rw_mbytes_per_sec": 0, 00:12:09.138 "r_mbytes_per_sec": 0, 00:12:09.138 "w_mbytes_per_sec": 0 00:12:09.138 }, 00:12:09.138 "claimed": false, 00:12:09.138 "zoned": false, 00:12:09.138 "supported_io_types": { 00:12:09.138 "read": true, 00:12:09.138 "write": true, 00:12:09.138 "unmap": true, 00:12:09.138 "flush": true, 00:12:09.138 "reset": true, 00:12:09.138 "nvme_admin": false, 00:12:09.138 "nvme_io": false, 00:12:09.138 "nvme_io_md": false, 00:12:09.138 "write_zeroes": true, 00:12:09.138 "zcopy": false, 00:12:09.138 "get_zone_info": false, 00:12:09.138 "zone_management": false, 00:12:09.138 "zone_append": false, 00:12:09.138 "compare": false, 00:12:09.138 "compare_and_write": false, 00:12:09.138 "abort": false, 00:12:09.138 "seek_hole": false, 00:12:09.138 "seek_data": false, 00:12:09.138 "copy": false, 00:12:09.138 "nvme_iov_md": false 00:12:09.138 }, 00:12:09.138 
"memory_domains": [ 00:12:09.138 { 00:12:09.138 "dma_device_id": "system", 00:12:09.138 "dma_device_type": 1 00:12:09.138 }, 00:12:09.138 { 00:12:09.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.138 "dma_device_type": 2 00:12:09.138 }, 00:12:09.138 { 00:12:09.138 "dma_device_id": "system", 00:12:09.138 "dma_device_type": 1 00:12:09.138 }, 00:12:09.138 { 00:12:09.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.138 "dma_device_type": 2 00:12:09.138 }, 00:12:09.138 { 00:12:09.138 "dma_device_id": "system", 00:12:09.138 "dma_device_type": 1 00:12:09.138 }, 00:12:09.138 { 00:12:09.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.138 "dma_device_type": 2 00:12:09.138 }, 00:12:09.138 { 00:12:09.138 "dma_device_id": "system", 00:12:09.138 "dma_device_type": 1 00:12:09.138 }, 00:12:09.138 { 00:12:09.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.138 "dma_device_type": 2 00:12:09.138 } 00:12:09.138 ], 00:12:09.138 "driver_specific": { 00:12:09.138 "raid": { 00:12:09.138 "uuid": "862b4304-e8c1-426f-84c3-0aade1e27804", 00:12:09.138 "strip_size_kb": 64, 00:12:09.138 "state": "online", 00:12:09.138 "raid_level": "concat", 00:12:09.138 "superblock": true, 00:12:09.138 "num_base_bdevs": 4, 00:12:09.138 "num_base_bdevs_discovered": 4, 00:12:09.138 "num_base_bdevs_operational": 4, 00:12:09.138 "base_bdevs_list": [ 00:12:09.138 { 00:12:09.138 "name": "pt1", 00:12:09.138 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:09.138 "is_configured": true, 00:12:09.138 "data_offset": 2048, 00:12:09.138 "data_size": 63488 00:12:09.138 }, 00:12:09.138 { 00:12:09.138 "name": "pt2", 00:12:09.138 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:09.138 "is_configured": true, 00:12:09.138 "data_offset": 2048, 00:12:09.138 "data_size": 63488 00:12:09.138 }, 00:12:09.138 { 00:12:09.138 "name": "pt3", 00:12:09.138 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:09.138 "is_configured": true, 00:12:09.138 "data_offset": 2048, 00:12:09.138 "data_size": 63488 
00:12:09.138 }, 00:12:09.138 { 00:12:09.138 "name": "pt4", 00:12:09.138 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:09.138 "is_configured": true, 00:12:09.138 "data_offset": 2048, 00:12:09.138 "data_size": 63488 00:12:09.138 } 00:12:09.138 ] 00:12:09.138 } 00:12:09.138 } 00:12:09.138 }' 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:09.138 pt2 00:12:09.138 pt3 00:12:09.138 pt4' 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.138 12:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.138 12:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.398 
12:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:09.398 [2024-09-30 12:29:21.145959] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 862b4304-e8c1-426f-84c3-0aade1e27804 '!=' 862b4304-e8c1-426f-84c3-0aade1e27804 ']' 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72500 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72500 ']' 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72500 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:09.398 12:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72500 00:12:09.399 12:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:09.399 12:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:09.399 killing process with pid 72500 00:12:09.399 12:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72500' 00:12:09.399 12:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72500 00:12:09.399 [2024-09-30 12:29:21.232320] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:09.399 [2024-09-30 12:29:21.232396] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:09.399 [2024-09-30 12:29:21.232464] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:09.399 [2024-09-30 12:29:21.232472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:09.399 12:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72500 00:12:09.968 [2024-09-30 12:29:21.648212] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:11.347 12:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:11.347 00:12:11.347 real 0m5.702s 00:12:11.347 user 0m7.874s 00:12:11.347 sys 0m1.087s 00:12:11.347 ************************************ 00:12:11.347 END TEST raid_superblock_test 00:12:11.347 ************************************ 00:12:11.347 12:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.347 12:29:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.347 12:29:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:12:11.347 12:29:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:11.347 12:29:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:11.347 12:29:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:11.347 ************************************ 00:12:11.347 START TEST raid_read_error_test 00:12:11.347 ************************************ 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dYcG9IvJut 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72763 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72763 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72763 ']' 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:11.347 12:29:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.347 [2024-09-30 12:29:23.169818] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:11.347 [2024-09-30 12:29:23.170008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72763 ] 00:12:11.606 [2024-09-30 12:29:23.334231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.865 [2024-09-30 12:29:23.574585] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.124 [2024-09-30 12:29:23.797469] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.124 [2024-09-30 12:29:23.797507] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.124 12:29:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:12.124 12:29:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:12.124 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:12.124 12:29:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:12.124 12:29:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.124 12:29:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.124 BaseBdev1_malloc 00:12:12.124 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.124 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:12.124 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.124 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.124 true 00:12:12.124 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:12.124 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:12.124 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.383 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.383 [2024-09-30 12:29:24.025640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:12.383 [2024-09-30 12:29:24.025762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.383 [2024-09-30 12:29:24.025784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:12.383 [2024-09-30 12:29:24.025795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.383 [2024-09-30 12:29:24.028145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.383 [2024-09-30 12:29:24.028184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:12.383 BaseBdev1 00:12:12.383 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.383 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:12.383 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:12.383 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.383 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.383 BaseBdev2_malloc 00:12:12.383 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.383 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:12.383 12:29:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.383 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.383 true 00:12:12.383 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.383 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:12.383 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.383 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.383 [2024-09-30 12:29:24.128310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:12.383 [2024-09-30 12:29:24.128410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.383 [2024-09-30 12:29:24.128445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:12.383 [2024-09-30 12:29:24.128482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.383 [2024-09-30 12:29:24.130779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.384 [2024-09-30 12:29:24.130849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:12.384 BaseBdev2 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.384 BaseBdev3_malloc 00:12:12.384 12:29:24 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.384 true 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.384 [2024-09-30 12:29:24.200959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:12.384 [2024-09-30 12:29:24.201047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.384 [2024-09-30 12:29:24.201080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:12.384 [2024-09-30 12:29:24.201112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.384 [2024-09-30 12:29:24.203412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.384 [2024-09-30 12:29:24.203483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:12.384 BaseBdev3 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.384 BaseBdev4_malloc 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.384 true 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.384 [2024-09-30 12:29:24.272085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:12.384 [2024-09-30 12:29:24.272176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.384 [2024-09-30 12:29:24.272210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:12.384 [2024-09-30 12:29:24.272242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.384 [2024-09-30 12:29:24.274645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.384 [2024-09-30 12:29:24.274719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:12.384 BaseBdev4 00:12:12.384 12:29:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.643 [2024-09-30 12:29:24.284178] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:12.643 [2024-09-30 12:29:24.286255] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:12.643 [2024-09-30 12:29:24.286367] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:12.643 [2024-09-30 12:29:24.286441] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:12.643 [2024-09-30 12:29:24.286681] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:12.643 [2024-09-30 12:29:24.286727] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:12.643 [2024-09-30 12:29:24.286993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:12.643 [2024-09-30 12:29:24.287198] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:12.643 [2024-09-30 12:29:24.287238] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:12.643 [2024-09-30 12:29:24.287447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:12.643 12:29:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.643 "name": "raid_bdev1", 00:12:12.643 "uuid": "192d3c68-cc2f-41f9-a343-caf9dbebf6e9", 00:12:12.643 "strip_size_kb": 64, 00:12:12.643 "state": "online", 00:12:12.643 "raid_level": "concat", 00:12:12.643 "superblock": true, 00:12:12.643 "num_base_bdevs": 4, 00:12:12.643 "num_base_bdevs_discovered": 4, 00:12:12.643 "num_base_bdevs_operational": 4, 00:12:12.643 "base_bdevs_list": [ 
00:12:12.643 { 00:12:12.643 "name": "BaseBdev1", 00:12:12.643 "uuid": "7c32d61b-112b-53c1-8e09-a7b49d11fd84", 00:12:12.643 "is_configured": true, 00:12:12.643 "data_offset": 2048, 00:12:12.643 "data_size": 63488 00:12:12.643 }, 00:12:12.643 { 00:12:12.643 "name": "BaseBdev2", 00:12:12.643 "uuid": "64ef0d83-9dfb-5d3e-9b05-a3c3b72c586c", 00:12:12.643 "is_configured": true, 00:12:12.643 "data_offset": 2048, 00:12:12.643 "data_size": 63488 00:12:12.643 }, 00:12:12.643 { 00:12:12.643 "name": "BaseBdev3", 00:12:12.643 "uuid": "4bd79c5e-0080-5d78-8a0c-7400d086ad04", 00:12:12.643 "is_configured": true, 00:12:12.643 "data_offset": 2048, 00:12:12.643 "data_size": 63488 00:12:12.643 }, 00:12:12.643 { 00:12:12.643 "name": "BaseBdev4", 00:12:12.643 "uuid": "16490930-812e-5e14-bec7-a8ae3e8c0331", 00:12:12.643 "is_configured": true, 00:12:12.643 "data_offset": 2048, 00:12:12.643 "data_size": 63488 00:12:12.643 } 00:12:12.643 ] 00:12:12.643 }' 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.643 12:29:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.903 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:12.903 12:29:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:13.162 [2024-09-30 12:29:24.840656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:14.099 12:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:14.099 12:29:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.099 12:29:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.099 12:29:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.099 12:29:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:14.099 12:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:14.099 12:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:14.099 12:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:14.099 12:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.099 12:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.099 12:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:14.100 12:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.100 12:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.100 12:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.100 12:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.100 12:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.100 12:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.100 12:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.100 12:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.100 12:29:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.100 12:29:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.100 12:29:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.100 12:29:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.100 "name": "raid_bdev1", 00:12:14.100 "uuid": "192d3c68-cc2f-41f9-a343-caf9dbebf6e9", 00:12:14.100 "strip_size_kb": 64, 00:12:14.100 "state": "online", 00:12:14.100 "raid_level": "concat", 00:12:14.100 "superblock": true, 00:12:14.100 "num_base_bdevs": 4, 00:12:14.100 "num_base_bdevs_discovered": 4, 00:12:14.100 "num_base_bdevs_operational": 4, 00:12:14.100 "base_bdevs_list": [ 00:12:14.100 { 00:12:14.100 "name": "BaseBdev1", 00:12:14.100 "uuid": "7c32d61b-112b-53c1-8e09-a7b49d11fd84", 00:12:14.100 "is_configured": true, 00:12:14.100 "data_offset": 2048, 00:12:14.100 "data_size": 63488 00:12:14.100 }, 00:12:14.100 { 00:12:14.100 "name": "BaseBdev2", 00:12:14.100 "uuid": "64ef0d83-9dfb-5d3e-9b05-a3c3b72c586c", 00:12:14.100 "is_configured": true, 00:12:14.100 "data_offset": 2048, 00:12:14.100 "data_size": 63488 00:12:14.100 }, 00:12:14.100 { 00:12:14.100 "name": "BaseBdev3", 00:12:14.100 "uuid": "4bd79c5e-0080-5d78-8a0c-7400d086ad04", 00:12:14.100 "is_configured": true, 00:12:14.100 "data_offset": 2048, 00:12:14.100 "data_size": 63488 00:12:14.100 }, 00:12:14.100 { 00:12:14.100 "name": "BaseBdev4", 00:12:14.100 "uuid": "16490930-812e-5e14-bec7-a8ae3e8c0331", 00:12:14.100 "is_configured": true, 00:12:14.100 "data_offset": 2048, 00:12:14.100 "data_size": 63488 00:12:14.100 } 00:12:14.100 ] 00:12:14.100 }' 00:12:14.100 12:29:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.100 12:29:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.360 12:29:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:14.360 12:29:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.360 12:29:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.360 [2024-09-30 12:29:26.200679] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:14.360 [2024-09-30 12:29:26.200782] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:14.360 [2024-09-30 12:29:26.203299] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.360 [2024-09-30 12:29:26.203406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.360 [2024-09-30 12:29:26.203475] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:14.360 [2024-09-30 12:29:26.203520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:14.360 { 00:12:14.360 "results": [ 00:12:14.360 { 00:12:14.360 "job": "raid_bdev1", 00:12:14.360 "core_mask": "0x1", 00:12:14.360 "workload": "randrw", 00:12:14.360 "percentage": 50, 00:12:14.360 "status": "finished", 00:12:14.360 "queue_depth": 1, 00:12:14.360 "io_size": 131072, 00:12:14.360 "runtime": 1.360718, 00:12:14.360 "iops": 14225.577966926285, 00:12:14.360 "mibps": 1778.1972458657856, 00:12:14.360 "io_failed": 1, 00:12:14.360 "io_timeout": 0, 00:12:14.360 "avg_latency_us": 99.27122032076828, 00:12:14.360 "min_latency_us": 24.370305676855896, 00:12:14.360 "max_latency_us": 1359.3711790393013 00:12:14.360 } 00:12:14.360 ], 00:12:14.360 "core_count": 1 00:12:14.360 } 00:12:14.360 12:29:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.360 12:29:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72763 00:12:14.360 12:29:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72763 ']' 00:12:14.360 12:29:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72763 00:12:14.360 12:29:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:12:14.360 12:29:26 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:14.360 12:29:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72763 00:12:14.360 killing process with pid 72763 00:12:14.360 12:29:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:14.360 12:29:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:14.360 12:29:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72763' 00:12:14.360 12:29:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72763 00:12:14.360 [2024-09-30 12:29:26.238893] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:14.360 12:29:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72763 00:12:14.929 [2024-09-30 12:29:26.584315] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:16.340 12:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dYcG9IvJut 00:12:16.340 12:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:16.340 12:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:16.340 12:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:16.340 12:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:16.340 12:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:16.340 12:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:16.340 12:29:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:16.340 ************************************ 00:12:16.340 END TEST raid_read_error_test 00:12:16.340 ************************************ 00:12:16.340 00:12:16.340 real 0m4.950s 
00:12:16.340 user 0m5.623s 00:12:16.340 sys 0m0.713s 00:12:16.340 12:29:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:16.340 12:29:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.340 12:29:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:12:16.340 12:29:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:16.340 12:29:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:16.340 12:29:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:16.340 ************************************ 00:12:16.340 START TEST raid_write_error_test 00:12:16.340 ************************************ 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fK1hxpxcGq 00:12:16.340 12:29:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72914 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72914 00:12:16.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 72914 ']' 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:16.340 12:29:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.340 [2024-09-30 12:29:28.176043] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:16.341 [2024-09-30 12:29:28.176221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72914 ] 00:12:16.600 [2024-09-30 12:29:28.343545] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.860 [2024-09-30 12:29:28.585645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.119 [2024-09-30 12:29:28.818398] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.119 [2024-09-30 12:29:28.818437] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.119 12:29:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:17.119 12:29:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:17.119 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:17.119 12:29:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:17.119 12:29:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.119 12:29:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.379 BaseBdev1_malloc 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.379 true 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.379 [2024-09-30 12:29:29.054021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:17.379 [2024-09-30 12:29:29.054121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.379 [2024-09-30 12:29:29.054154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:17.379 [2024-09-30 12:29:29.054184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.379 [2024-09-30 12:29:29.056537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.379 [2024-09-30 12:29:29.056612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:17.379 BaseBdev1 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.379 BaseBdev2_malloc 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:17.379 12:29:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.379 true 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.379 [2024-09-30 12:29:29.153764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:17.379 [2024-09-30 12:29:29.153859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.379 [2024-09-30 12:29:29.153891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:17.379 [2024-09-30 12:29:29.153923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.379 [2024-09-30 12:29:29.156204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.379 [2024-09-30 12:29:29.156277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:17.379 BaseBdev2 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:17.379 BaseBdev3_malloc 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.379 true 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.379 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.379 [2024-09-30 12:29:29.226631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:17.379 [2024-09-30 12:29:29.226719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.379 [2024-09-30 12:29:29.226758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:17.379 [2024-09-30 12:29:29.226789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.380 [2024-09-30 12:29:29.229131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.380 [2024-09-30 12:29:29.229170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:17.380 BaseBdev3 00:12:17.380 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.380 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:17.380 12:29:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:17.380 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.380 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.639 BaseBdev4_malloc 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.639 true 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.639 [2024-09-30 12:29:29.299507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:17.639 [2024-09-30 12:29:29.299601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.639 [2024-09-30 12:29:29.299634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:17.639 [2024-09-30 12:29:29.299664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.639 [2024-09-30 12:29:29.301966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.639 [2024-09-30 12:29:29.302002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:17.639 BaseBdev4 
00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.639 [2024-09-30 12:29:29.311574] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.639 [2024-09-30 12:29:29.313615] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.639 [2024-09-30 12:29:29.313726] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:17.639 [2024-09-30 12:29:29.313811] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:17.639 [2024-09-30 12:29:29.314054] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:17.639 [2024-09-30 12:29:29.314102] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:17.639 [2024-09-30 12:29:29.314347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:17.639 [2024-09-30 12:29:29.314529] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:17.639 [2024-09-30 12:29:29.314565] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:17.639 [2024-09-30 12:29:29.314759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.639 "name": "raid_bdev1", 00:12:17.639 "uuid": "c76dc705-7b10-459b-a3e9-5c6f19104bed", 00:12:17.639 "strip_size_kb": 64, 00:12:17.639 "state": "online", 00:12:17.639 "raid_level": "concat", 00:12:17.639 "superblock": true, 00:12:17.639 "num_base_bdevs": 4, 00:12:17.639 "num_base_bdevs_discovered": 4, 00:12:17.639 
"num_base_bdevs_operational": 4, 00:12:17.639 "base_bdevs_list": [ 00:12:17.639 { 00:12:17.639 "name": "BaseBdev1", 00:12:17.639 "uuid": "8c2a6066-20f3-5517-9801-1c2e50df7695", 00:12:17.639 "is_configured": true, 00:12:17.639 "data_offset": 2048, 00:12:17.639 "data_size": 63488 00:12:17.639 }, 00:12:17.639 { 00:12:17.639 "name": "BaseBdev2", 00:12:17.639 "uuid": "9dff3826-8261-5667-a751-013c830cf4e1", 00:12:17.639 "is_configured": true, 00:12:17.639 "data_offset": 2048, 00:12:17.639 "data_size": 63488 00:12:17.639 }, 00:12:17.639 { 00:12:17.639 "name": "BaseBdev3", 00:12:17.639 "uuid": "4f23b2c7-f70f-5352-92c5-5a407116a778", 00:12:17.639 "is_configured": true, 00:12:17.639 "data_offset": 2048, 00:12:17.639 "data_size": 63488 00:12:17.639 }, 00:12:17.639 { 00:12:17.639 "name": "BaseBdev4", 00:12:17.639 "uuid": "27ab1408-7d00-5c36-9059-5a7d98bf6a60", 00:12:17.639 "is_configured": true, 00:12:17.639 "data_offset": 2048, 00:12:17.639 "data_size": 63488 00:12:17.639 } 00:12:17.639 ] 00:12:17.639 }' 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.639 12:29:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.899 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:17.899 12:29:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:18.159 [2024-09-30 12:29:29.843971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.098 12:29:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.098 "name": "raid_bdev1", 00:12:19.098 "uuid": "c76dc705-7b10-459b-a3e9-5c6f19104bed", 00:12:19.098 "strip_size_kb": 64, 00:12:19.098 "state": "online", 00:12:19.098 "raid_level": "concat", 00:12:19.098 "superblock": true, 00:12:19.098 "num_base_bdevs": 4, 00:12:19.098 "num_base_bdevs_discovered": 4, 00:12:19.098 "num_base_bdevs_operational": 4, 00:12:19.098 "base_bdevs_list": [ 00:12:19.098 { 00:12:19.098 "name": "BaseBdev1", 00:12:19.098 "uuid": "8c2a6066-20f3-5517-9801-1c2e50df7695", 00:12:19.098 "is_configured": true, 00:12:19.098 "data_offset": 2048, 00:12:19.098 "data_size": 63488 00:12:19.098 }, 00:12:19.098 { 00:12:19.098 "name": "BaseBdev2", 00:12:19.098 "uuid": "9dff3826-8261-5667-a751-013c830cf4e1", 00:12:19.098 "is_configured": true, 00:12:19.098 "data_offset": 2048, 00:12:19.098 "data_size": 63488 00:12:19.098 }, 00:12:19.098 { 00:12:19.098 "name": "BaseBdev3", 00:12:19.098 "uuid": "4f23b2c7-f70f-5352-92c5-5a407116a778", 00:12:19.098 "is_configured": true, 00:12:19.098 "data_offset": 2048, 00:12:19.098 "data_size": 63488 00:12:19.098 }, 00:12:19.098 { 00:12:19.098 "name": "BaseBdev4", 00:12:19.098 "uuid": "27ab1408-7d00-5c36-9059-5a7d98bf6a60", 00:12:19.098 "is_configured": true, 00:12:19.098 "data_offset": 2048, 00:12:19.098 "data_size": 63488 00:12:19.098 } 00:12:19.098 ] 00:12:19.098 }' 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.098 12:29:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.357 12:29:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:19.357 12:29:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.358 12:29:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:19.358 [2024-09-30 12:29:31.244964] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:19.358 [2024-09-30 12:29:31.245052] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:19.358 [2024-09-30 12:29:31.247608] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.358 [2024-09-30 12:29:31.247714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.358 [2024-09-30 12:29:31.247783] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:19.358 [2024-09-30 12:29:31.247797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:19.358 { 00:12:19.358 "results": [ 00:12:19.358 { 00:12:19.358 "job": "raid_bdev1", 00:12:19.358 "core_mask": "0x1", 00:12:19.358 "workload": "randrw", 00:12:19.358 "percentage": 50, 00:12:19.358 "status": "finished", 00:12:19.358 "queue_depth": 1, 00:12:19.358 "io_size": 131072, 00:12:19.358 "runtime": 1.401785, 00:12:19.358 "iops": 14344.567818888061, 00:12:19.358 "mibps": 1793.0709773610076, 00:12:19.358 "io_failed": 1, 00:12:19.358 "io_timeout": 0, 00:12:19.358 "avg_latency_us": 98.38787898529434, 00:12:19.358 "min_latency_us": 24.482096069868994, 00:12:19.358 "max_latency_us": 1395.1441048034935 00:12:19.358 } 00:12:19.358 ], 00:12:19.358 "core_count": 1 00:12:19.358 } 00:12:19.358 12:29:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.358 12:29:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72914 00:12:19.358 12:29:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 72914 ']' 00:12:19.358 12:29:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 72914 00:12:19.617 12:29:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:12:19.617 12:29:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:19.617 12:29:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72914 00:12:19.617 12:29:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:19.617 killing process with pid 72914 00:12:19.617 12:29:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:19.617 12:29:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72914' 00:12:19.617 12:29:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 72914 00:12:19.617 [2024-09-30 12:29:31.294102] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:19.617 12:29:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 72914 00:12:19.876 [2024-09-30 12:29:31.632678] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:21.258 12:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fK1hxpxcGq 00:12:21.258 12:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:21.258 12:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:21.258 12:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:21.258 12:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:21.258 ************************************ 00:12:21.258 END TEST raid_write_error_test 00:12:21.258 ************************************ 00:12:21.258 12:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:21.258 12:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:21.258 12:29:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:21.258 00:12:21.258 real 0m4.957s 00:12:21.258 user 0m5.673s 00:12:21.258 sys 0m0.712s 00:12:21.258 12:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.258 12:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.258 12:29:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:21.258 12:29:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:21.258 12:29:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:21.258 12:29:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.258 12:29:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:21.258 ************************************ 00:12:21.258 START TEST raid_state_function_test 00:12:21.258 ************************************ 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:21.258 12:29:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73058 00:12:21.258 Process raid pid: 73058 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73058' 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73058 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73058 ']' 00:12:21.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:21.258 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.518 [2024-09-30 12:29:33.197734] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:21.518 [2024-09-30 12:29:33.198236] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.518 [2024-09-30 12:29:33.361841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.777 [2024-09-30 12:29:33.605543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.037 [2024-09-30 12:29:33.840167] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.037 [2024-09-30 12:29:33.840203] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.297 [2024-09-30 12:29:34.026255] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:22.297 [2024-09-30 12:29:34.026356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:22.297 [2024-09-30 12:29:34.026401] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:22.297 [2024-09-30 12:29:34.026424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:22.297 [2024-09-30 12:29:34.026442] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:22.297 [2024-09-30 12:29:34.026462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:22.297 [2024-09-30 12:29:34.026479] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:22.297 [2024-09-30 12:29:34.026500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.297 "name": "Existed_Raid", 00:12:22.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.297 "strip_size_kb": 0, 00:12:22.297 "state": "configuring", 00:12:22.297 "raid_level": "raid1", 00:12:22.297 "superblock": false, 00:12:22.297 "num_base_bdevs": 4, 00:12:22.297 "num_base_bdevs_discovered": 0, 00:12:22.297 "num_base_bdevs_operational": 4, 00:12:22.297 "base_bdevs_list": [ 00:12:22.297 { 00:12:22.297 "name": "BaseBdev1", 00:12:22.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.297 "is_configured": false, 00:12:22.297 "data_offset": 0, 00:12:22.297 "data_size": 0 00:12:22.297 }, 00:12:22.297 { 00:12:22.297 "name": "BaseBdev2", 00:12:22.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.297 "is_configured": false, 00:12:22.297 "data_offset": 0, 00:12:22.297 "data_size": 0 00:12:22.297 }, 00:12:22.297 { 00:12:22.297 "name": "BaseBdev3", 00:12:22.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.297 "is_configured": false, 00:12:22.297 "data_offset": 0, 00:12:22.297 "data_size": 0 00:12:22.297 }, 00:12:22.297 { 00:12:22.297 "name": "BaseBdev4", 00:12:22.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.297 "is_configured": false, 00:12:22.297 "data_offset": 0, 00:12:22.297 "data_size": 0 00:12:22.297 } 00:12:22.297 ] 00:12:22.297 }' 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.297 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.868 [2024-09-30 12:29:34.469400] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:22.868 [2024-09-30 12:29:34.469440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.868 [2024-09-30 12:29:34.481405] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:22.868 [2024-09-30 12:29:34.481483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:22.868 [2024-09-30 12:29:34.481509] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:22.868 [2024-09-30 12:29:34.481532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:22.868 [2024-09-30 12:29:34.481549] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:22.868 [2024-09-30 12:29:34.481569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:22.868 [2024-09-30 12:29:34.481586] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:22.868 [2024-09-30 12:29:34.481606] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.868 [2024-09-30 12:29:34.563237] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:22.868 BaseBdev1 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.868 [ 00:12:22.868 { 00:12:22.868 "name": "BaseBdev1", 00:12:22.868 "aliases": [ 00:12:22.868 "62266e41-e1c8-4b19-9051-2e17d8b4a5c5" 00:12:22.868 ], 00:12:22.868 "product_name": "Malloc disk", 00:12:22.868 "block_size": 512, 00:12:22.868 "num_blocks": 65536, 00:12:22.868 "uuid": "62266e41-e1c8-4b19-9051-2e17d8b4a5c5", 00:12:22.868 "assigned_rate_limits": { 00:12:22.868 "rw_ios_per_sec": 0, 00:12:22.868 "rw_mbytes_per_sec": 0, 00:12:22.868 "r_mbytes_per_sec": 0, 00:12:22.868 "w_mbytes_per_sec": 0 00:12:22.868 }, 00:12:22.868 "claimed": true, 00:12:22.868 "claim_type": "exclusive_write", 00:12:22.868 "zoned": false, 00:12:22.868 "supported_io_types": { 00:12:22.868 "read": true, 00:12:22.868 "write": true, 00:12:22.868 "unmap": true, 00:12:22.868 "flush": true, 00:12:22.868 "reset": true, 00:12:22.868 "nvme_admin": false, 00:12:22.868 "nvme_io": false, 00:12:22.868 "nvme_io_md": false, 00:12:22.868 "write_zeroes": true, 00:12:22.868 "zcopy": true, 00:12:22.868 "get_zone_info": false, 00:12:22.868 "zone_management": false, 00:12:22.868 "zone_append": false, 00:12:22.868 "compare": false, 00:12:22.868 "compare_and_write": false, 00:12:22.868 "abort": true, 00:12:22.868 "seek_hole": false, 00:12:22.868 "seek_data": false, 00:12:22.868 "copy": true, 00:12:22.868 "nvme_iov_md": false 00:12:22.868 }, 00:12:22.868 "memory_domains": [ 00:12:22.868 { 00:12:22.868 "dma_device_id": "system", 00:12:22.868 "dma_device_type": 1 00:12:22.868 }, 00:12:22.868 { 00:12:22.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.868 "dma_device_type": 2 00:12:22.868 } 00:12:22.868 ], 00:12:22.868 "driver_specific": {} 00:12:22.868 } 00:12:22.868 ] 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.868 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.868 "name": "Existed_Raid", 
00:12:22.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.868 "strip_size_kb": 0, 00:12:22.868 "state": "configuring", 00:12:22.868 "raid_level": "raid1", 00:12:22.868 "superblock": false, 00:12:22.868 "num_base_bdevs": 4, 00:12:22.868 "num_base_bdevs_discovered": 1, 00:12:22.868 "num_base_bdevs_operational": 4, 00:12:22.868 "base_bdevs_list": [ 00:12:22.868 { 00:12:22.868 "name": "BaseBdev1", 00:12:22.868 "uuid": "62266e41-e1c8-4b19-9051-2e17d8b4a5c5", 00:12:22.868 "is_configured": true, 00:12:22.868 "data_offset": 0, 00:12:22.868 "data_size": 65536 00:12:22.868 }, 00:12:22.868 { 00:12:22.868 "name": "BaseBdev2", 00:12:22.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.868 "is_configured": false, 00:12:22.868 "data_offset": 0, 00:12:22.868 "data_size": 0 00:12:22.868 }, 00:12:22.868 { 00:12:22.869 "name": "BaseBdev3", 00:12:22.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.869 "is_configured": false, 00:12:22.869 "data_offset": 0, 00:12:22.869 "data_size": 0 00:12:22.869 }, 00:12:22.869 { 00:12:22.869 "name": "BaseBdev4", 00:12:22.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.869 "is_configured": false, 00:12:22.869 "data_offset": 0, 00:12:22.869 "data_size": 0 00:12:22.869 } 00:12:22.869 ] 00:12:22.869 }' 00:12:22.869 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.869 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.438 [2024-09-30 12:29:35.058392] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:23.438 [2024-09-30 12:29:35.058501] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.438 [2024-09-30 12:29:35.070421] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.438 [2024-09-30 12:29:35.072504] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:23.438 [2024-09-30 12:29:35.072584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:23.438 [2024-09-30 12:29:35.072618] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:23.438 [2024-09-30 12:29:35.072642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:23.438 [2024-09-30 12:29:35.072669] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:23.438 [2024-09-30 12:29:35.072690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:23.438 
12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.438 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.438 "name": "Existed_Raid", 00:12:23.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.438 "strip_size_kb": 0, 00:12:23.438 "state": "configuring", 00:12:23.438 "raid_level": "raid1", 00:12:23.438 "superblock": false, 00:12:23.438 "num_base_bdevs": 4, 00:12:23.438 "num_base_bdevs_discovered": 1, 
00:12:23.438 "num_base_bdevs_operational": 4, 00:12:23.438 "base_bdevs_list": [ 00:12:23.438 { 00:12:23.438 "name": "BaseBdev1", 00:12:23.438 "uuid": "62266e41-e1c8-4b19-9051-2e17d8b4a5c5", 00:12:23.438 "is_configured": true, 00:12:23.438 "data_offset": 0, 00:12:23.438 "data_size": 65536 00:12:23.438 }, 00:12:23.438 { 00:12:23.438 "name": "BaseBdev2", 00:12:23.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.438 "is_configured": false, 00:12:23.438 "data_offset": 0, 00:12:23.438 "data_size": 0 00:12:23.438 }, 00:12:23.438 { 00:12:23.438 "name": "BaseBdev3", 00:12:23.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.438 "is_configured": false, 00:12:23.439 "data_offset": 0, 00:12:23.439 "data_size": 0 00:12:23.439 }, 00:12:23.439 { 00:12:23.439 "name": "BaseBdev4", 00:12:23.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.439 "is_configured": false, 00:12:23.439 "data_offset": 0, 00:12:23.439 "data_size": 0 00:12:23.439 } 00:12:23.439 ] 00:12:23.439 }' 00:12:23.439 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.439 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.698 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:23.698 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.698 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.698 [2024-09-30 12:29:35.561235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:23.698 BaseBdev2 00:12:23.698 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.698 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:23.698 12:29:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:23.698 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:23.698 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:23.698 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:23.698 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:23.698 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:23.698 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.698 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.698 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.698 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:23.698 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.698 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.698 [ 00:12:23.698 { 00:12:23.698 "name": "BaseBdev2", 00:12:23.698 "aliases": [ 00:12:23.698 "26ea1a7b-8c17-4893-a51d-77b466b0365c" 00:12:23.698 ], 00:12:23.698 "product_name": "Malloc disk", 00:12:23.698 "block_size": 512, 00:12:23.698 "num_blocks": 65536, 00:12:23.698 "uuid": "26ea1a7b-8c17-4893-a51d-77b466b0365c", 00:12:23.698 "assigned_rate_limits": { 00:12:23.698 "rw_ios_per_sec": 0, 00:12:23.698 "rw_mbytes_per_sec": 0, 00:12:23.698 "r_mbytes_per_sec": 0, 00:12:23.698 "w_mbytes_per_sec": 0 00:12:23.698 }, 00:12:23.698 "claimed": true, 00:12:23.698 "claim_type": "exclusive_write", 00:12:23.698 "zoned": false, 00:12:23.698 "supported_io_types": { 00:12:23.698 "read": true, 
00:12:23.698 "write": true, 00:12:23.698 "unmap": true, 00:12:23.698 "flush": true, 00:12:23.698 "reset": true, 00:12:23.698 "nvme_admin": false, 00:12:23.698 "nvme_io": false, 00:12:23.698 "nvme_io_md": false, 00:12:23.698 "write_zeroes": true, 00:12:23.698 "zcopy": true, 00:12:23.698 "get_zone_info": false, 00:12:23.698 "zone_management": false, 00:12:23.698 "zone_append": false, 00:12:23.698 "compare": false, 00:12:23.698 "compare_and_write": false, 00:12:23.959 "abort": true, 00:12:23.959 "seek_hole": false, 00:12:23.959 "seek_data": false, 00:12:23.959 "copy": true, 00:12:23.959 "nvme_iov_md": false 00:12:23.959 }, 00:12:23.959 "memory_domains": [ 00:12:23.959 { 00:12:23.959 "dma_device_id": "system", 00:12:23.959 "dma_device_type": 1 00:12:23.959 }, 00:12:23.959 { 00:12:23.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.959 "dma_device_type": 2 00:12:23.959 } 00:12:23.959 ], 00:12:23.959 "driver_specific": {} 00:12:23.959 } 00:12:23.959 ] 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.959 "name": "Existed_Raid", 00:12:23.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.959 "strip_size_kb": 0, 00:12:23.959 "state": "configuring", 00:12:23.959 "raid_level": "raid1", 00:12:23.959 "superblock": false, 00:12:23.959 "num_base_bdevs": 4, 00:12:23.959 "num_base_bdevs_discovered": 2, 00:12:23.959 "num_base_bdevs_operational": 4, 00:12:23.959 "base_bdevs_list": [ 00:12:23.959 { 00:12:23.959 "name": "BaseBdev1", 00:12:23.959 "uuid": "62266e41-e1c8-4b19-9051-2e17d8b4a5c5", 00:12:23.959 "is_configured": true, 00:12:23.959 "data_offset": 0, 00:12:23.959 "data_size": 65536 00:12:23.959 }, 00:12:23.959 { 00:12:23.959 "name": "BaseBdev2", 00:12:23.959 "uuid": "26ea1a7b-8c17-4893-a51d-77b466b0365c", 00:12:23.959 "is_configured": true, 
00:12:23.959 "data_offset": 0, 00:12:23.959 "data_size": 65536 00:12:23.959 }, 00:12:23.959 { 00:12:23.959 "name": "BaseBdev3", 00:12:23.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.959 "is_configured": false, 00:12:23.959 "data_offset": 0, 00:12:23.959 "data_size": 0 00:12:23.959 }, 00:12:23.959 { 00:12:23.959 "name": "BaseBdev4", 00:12:23.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.959 "is_configured": false, 00:12:23.959 "data_offset": 0, 00:12:23.959 "data_size": 0 00:12:23.959 } 00:12:23.959 ] 00:12:23.959 }' 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.959 12:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.219 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:24.219 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.219 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.478 [2024-09-30 12:29:36.124027] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:24.478 BaseBdev3 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.478 [ 00:12:24.478 { 00:12:24.478 "name": "BaseBdev3", 00:12:24.478 "aliases": [ 00:12:24.478 "3fc12274-f02c-4a47-94d9-353326e4f218" 00:12:24.478 ], 00:12:24.478 "product_name": "Malloc disk", 00:12:24.478 "block_size": 512, 00:12:24.478 "num_blocks": 65536, 00:12:24.478 "uuid": "3fc12274-f02c-4a47-94d9-353326e4f218", 00:12:24.478 "assigned_rate_limits": { 00:12:24.478 "rw_ios_per_sec": 0, 00:12:24.478 "rw_mbytes_per_sec": 0, 00:12:24.478 "r_mbytes_per_sec": 0, 00:12:24.478 "w_mbytes_per_sec": 0 00:12:24.478 }, 00:12:24.478 "claimed": true, 00:12:24.478 "claim_type": "exclusive_write", 00:12:24.478 "zoned": false, 00:12:24.478 "supported_io_types": { 00:12:24.478 "read": true, 00:12:24.478 "write": true, 00:12:24.478 "unmap": true, 00:12:24.478 "flush": true, 00:12:24.478 "reset": true, 00:12:24.478 "nvme_admin": false, 00:12:24.478 "nvme_io": false, 00:12:24.478 "nvme_io_md": false, 00:12:24.478 "write_zeroes": true, 00:12:24.478 "zcopy": true, 00:12:24.478 "get_zone_info": false, 00:12:24.478 "zone_management": false, 00:12:24.478 "zone_append": false, 00:12:24.478 "compare": false, 00:12:24.478 "compare_and_write": false, 
00:12:24.478 "abort": true, 00:12:24.478 "seek_hole": false, 00:12:24.478 "seek_data": false, 00:12:24.478 "copy": true, 00:12:24.478 "nvme_iov_md": false 00:12:24.478 }, 00:12:24.478 "memory_domains": [ 00:12:24.478 { 00:12:24.478 "dma_device_id": "system", 00:12:24.478 "dma_device_type": 1 00:12:24.478 }, 00:12:24.478 { 00:12:24.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.478 "dma_device_type": 2 00:12:24.478 } 00:12:24.478 ], 00:12:24.478 "driver_specific": {} 00:12:24.478 } 00:12:24.478 ] 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.478 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.479 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.479 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.479 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.479 "name": "Existed_Raid", 00:12:24.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.479 "strip_size_kb": 0, 00:12:24.479 "state": "configuring", 00:12:24.479 "raid_level": "raid1", 00:12:24.479 "superblock": false, 00:12:24.479 "num_base_bdevs": 4, 00:12:24.479 "num_base_bdevs_discovered": 3, 00:12:24.479 "num_base_bdevs_operational": 4, 00:12:24.479 "base_bdevs_list": [ 00:12:24.479 { 00:12:24.479 "name": "BaseBdev1", 00:12:24.479 "uuid": "62266e41-e1c8-4b19-9051-2e17d8b4a5c5", 00:12:24.479 "is_configured": true, 00:12:24.479 "data_offset": 0, 00:12:24.479 "data_size": 65536 00:12:24.479 }, 00:12:24.479 { 00:12:24.479 "name": "BaseBdev2", 00:12:24.479 "uuid": "26ea1a7b-8c17-4893-a51d-77b466b0365c", 00:12:24.479 "is_configured": true, 00:12:24.479 "data_offset": 0, 00:12:24.479 "data_size": 65536 00:12:24.479 }, 00:12:24.479 { 00:12:24.479 "name": "BaseBdev3", 00:12:24.479 "uuid": "3fc12274-f02c-4a47-94d9-353326e4f218", 00:12:24.479 "is_configured": true, 00:12:24.479 "data_offset": 0, 00:12:24.479 "data_size": 65536 00:12:24.479 }, 00:12:24.479 { 00:12:24.479 "name": "BaseBdev4", 00:12:24.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.479 "is_configured": false, 
00:12:24.479 "data_offset": 0, 00:12:24.479 "data_size": 0 00:12:24.479 } 00:12:24.479 ] 00:12:24.479 }' 00:12:24.479 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.479 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.743 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:24.743 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.743 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.743 [2024-09-30 12:29:36.609117] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:24.743 [2024-09-30 12:29:36.609171] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:24.743 [2024-09-30 12:29:36.609184] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:24.743 [2024-09-30 12:29:36.609473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:24.743 [2024-09-30 12:29:36.609648] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:24.743 [2024-09-30 12:29:36.609661] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:24.743 [2024-09-30 12:29:36.609974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.743 BaseBdev4 00:12:24.743 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.743 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:24.743 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:24.743 12:29:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:24.743 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:24.743 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:24.743 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:24.743 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:24.743 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.743 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.743 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.743 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:24.743 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.743 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.743 [ 00:12:24.743 { 00:12:24.743 "name": "BaseBdev4", 00:12:24.743 "aliases": [ 00:12:24.743 "c12b7972-dc77-42fa-a2ae-4315609ec187" 00:12:24.743 ], 00:12:24.743 "product_name": "Malloc disk", 00:12:24.743 "block_size": 512, 00:12:24.743 "num_blocks": 65536, 00:12:24.743 "uuid": "c12b7972-dc77-42fa-a2ae-4315609ec187", 00:12:24.743 "assigned_rate_limits": { 00:12:24.743 "rw_ios_per_sec": 0, 00:12:24.743 "rw_mbytes_per_sec": 0, 00:12:24.743 "r_mbytes_per_sec": 0, 00:12:24.743 "w_mbytes_per_sec": 0 00:12:24.743 }, 00:12:25.002 "claimed": true, 00:12:25.002 "claim_type": "exclusive_write", 00:12:25.002 "zoned": false, 00:12:25.002 "supported_io_types": { 00:12:25.002 "read": true, 00:12:25.002 "write": true, 00:12:25.002 "unmap": true, 00:12:25.002 "flush": true, 00:12:25.002 "reset": true, 00:12:25.002 
"nvme_admin": false, 00:12:25.002 "nvme_io": false, 00:12:25.002 "nvme_io_md": false, 00:12:25.002 "write_zeroes": true, 00:12:25.002 "zcopy": true, 00:12:25.002 "get_zone_info": false, 00:12:25.002 "zone_management": false, 00:12:25.002 "zone_append": false, 00:12:25.002 "compare": false, 00:12:25.002 "compare_and_write": false, 00:12:25.002 "abort": true, 00:12:25.002 "seek_hole": false, 00:12:25.002 "seek_data": false, 00:12:25.002 "copy": true, 00:12:25.002 "nvme_iov_md": false 00:12:25.002 }, 00:12:25.002 "memory_domains": [ 00:12:25.002 { 00:12:25.002 "dma_device_id": "system", 00:12:25.002 "dma_device_type": 1 00:12:25.002 }, 00:12:25.002 { 00:12:25.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.002 "dma_device_type": 2 00:12:25.002 } 00:12:25.002 ], 00:12:25.002 "driver_specific": {} 00:12:25.002 } 00:12:25.002 ] 00:12:25.002 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.002 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:25.002 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:25.002 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:25.002 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:25.002 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.002 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.002 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.002 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.003 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.003 12:29:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.003 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.003 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.003 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.003 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.003 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.003 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.003 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.003 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.003 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.003 "name": "Existed_Raid", 00:12:25.003 "uuid": "8ea0649a-5814-465f-aea2-506ac896f2b0", 00:12:25.003 "strip_size_kb": 0, 00:12:25.003 "state": "online", 00:12:25.003 "raid_level": "raid1", 00:12:25.003 "superblock": false, 00:12:25.003 "num_base_bdevs": 4, 00:12:25.003 "num_base_bdevs_discovered": 4, 00:12:25.003 "num_base_bdevs_operational": 4, 00:12:25.003 "base_bdevs_list": [ 00:12:25.003 { 00:12:25.003 "name": "BaseBdev1", 00:12:25.003 "uuid": "62266e41-e1c8-4b19-9051-2e17d8b4a5c5", 00:12:25.003 "is_configured": true, 00:12:25.003 "data_offset": 0, 00:12:25.003 "data_size": 65536 00:12:25.003 }, 00:12:25.003 { 00:12:25.003 "name": "BaseBdev2", 00:12:25.003 "uuid": "26ea1a7b-8c17-4893-a51d-77b466b0365c", 00:12:25.003 "is_configured": true, 00:12:25.003 "data_offset": 0, 00:12:25.003 "data_size": 65536 00:12:25.003 }, 00:12:25.003 { 00:12:25.003 "name": "BaseBdev3", 00:12:25.003 "uuid": 
"3fc12274-f02c-4a47-94d9-353326e4f218", 00:12:25.003 "is_configured": true, 00:12:25.003 "data_offset": 0, 00:12:25.003 "data_size": 65536 00:12:25.003 }, 00:12:25.003 { 00:12:25.003 "name": "BaseBdev4", 00:12:25.003 "uuid": "c12b7972-dc77-42fa-a2ae-4315609ec187", 00:12:25.003 "is_configured": true, 00:12:25.003 "data_offset": 0, 00:12:25.003 "data_size": 65536 00:12:25.003 } 00:12:25.003 ] 00:12:25.003 }' 00:12:25.003 12:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.003 12:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.262 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:25.262 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:25.262 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:25.262 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:25.262 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:25.262 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:25.262 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:25.262 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:25.262 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.262 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.262 [2024-09-30 12:29:37.072667] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.262 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.262 12:29:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:25.262 "name": "Existed_Raid", 00:12:25.262 "aliases": [ 00:12:25.262 "8ea0649a-5814-465f-aea2-506ac896f2b0" 00:12:25.262 ], 00:12:25.262 "product_name": "Raid Volume", 00:12:25.262 "block_size": 512, 00:12:25.262 "num_blocks": 65536, 00:12:25.262 "uuid": "8ea0649a-5814-465f-aea2-506ac896f2b0", 00:12:25.262 "assigned_rate_limits": { 00:12:25.262 "rw_ios_per_sec": 0, 00:12:25.262 "rw_mbytes_per_sec": 0, 00:12:25.262 "r_mbytes_per_sec": 0, 00:12:25.262 "w_mbytes_per_sec": 0 00:12:25.262 }, 00:12:25.262 "claimed": false, 00:12:25.262 "zoned": false, 00:12:25.262 "supported_io_types": { 00:12:25.262 "read": true, 00:12:25.262 "write": true, 00:12:25.262 "unmap": false, 00:12:25.262 "flush": false, 00:12:25.262 "reset": true, 00:12:25.262 "nvme_admin": false, 00:12:25.262 "nvme_io": false, 00:12:25.262 "nvme_io_md": false, 00:12:25.262 "write_zeroes": true, 00:12:25.262 "zcopy": false, 00:12:25.262 "get_zone_info": false, 00:12:25.262 "zone_management": false, 00:12:25.262 "zone_append": false, 00:12:25.262 "compare": false, 00:12:25.262 "compare_and_write": false, 00:12:25.262 "abort": false, 00:12:25.262 "seek_hole": false, 00:12:25.262 "seek_data": false, 00:12:25.262 "copy": false, 00:12:25.262 "nvme_iov_md": false 00:12:25.262 }, 00:12:25.262 "memory_domains": [ 00:12:25.262 { 00:12:25.262 "dma_device_id": "system", 00:12:25.262 "dma_device_type": 1 00:12:25.262 }, 00:12:25.262 { 00:12:25.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.262 "dma_device_type": 2 00:12:25.262 }, 00:12:25.262 { 00:12:25.262 "dma_device_id": "system", 00:12:25.262 "dma_device_type": 1 00:12:25.262 }, 00:12:25.262 { 00:12:25.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.262 "dma_device_type": 2 00:12:25.262 }, 00:12:25.262 { 00:12:25.262 "dma_device_id": "system", 00:12:25.262 "dma_device_type": 1 00:12:25.262 }, 00:12:25.262 { 00:12:25.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:25.262 "dma_device_type": 2 00:12:25.262 }, 00:12:25.262 { 00:12:25.262 "dma_device_id": "system", 00:12:25.262 "dma_device_type": 1 00:12:25.262 }, 00:12:25.262 { 00:12:25.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.262 "dma_device_type": 2 00:12:25.262 } 00:12:25.262 ], 00:12:25.262 "driver_specific": { 00:12:25.262 "raid": { 00:12:25.262 "uuid": "8ea0649a-5814-465f-aea2-506ac896f2b0", 00:12:25.262 "strip_size_kb": 0, 00:12:25.262 "state": "online", 00:12:25.263 "raid_level": "raid1", 00:12:25.263 "superblock": false, 00:12:25.263 "num_base_bdevs": 4, 00:12:25.263 "num_base_bdevs_discovered": 4, 00:12:25.263 "num_base_bdevs_operational": 4, 00:12:25.263 "base_bdevs_list": [ 00:12:25.263 { 00:12:25.263 "name": "BaseBdev1", 00:12:25.263 "uuid": "62266e41-e1c8-4b19-9051-2e17d8b4a5c5", 00:12:25.263 "is_configured": true, 00:12:25.263 "data_offset": 0, 00:12:25.263 "data_size": 65536 00:12:25.263 }, 00:12:25.263 { 00:12:25.263 "name": "BaseBdev2", 00:12:25.263 "uuid": "26ea1a7b-8c17-4893-a51d-77b466b0365c", 00:12:25.263 "is_configured": true, 00:12:25.263 "data_offset": 0, 00:12:25.263 "data_size": 65536 00:12:25.263 }, 00:12:25.263 { 00:12:25.263 "name": "BaseBdev3", 00:12:25.263 "uuid": "3fc12274-f02c-4a47-94d9-353326e4f218", 00:12:25.263 "is_configured": true, 00:12:25.263 "data_offset": 0, 00:12:25.263 "data_size": 65536 00:12:25.263 }, 00:12:25.263 { 00:12:25.263 "name": "BaseBdev4", 00:12:25.263 "uuid": "c12b7972-dc77-42fa-a2ae-4315609ec187", 00:12:25.263 "is_configured": true, 00:12:25.263 "data_offset": 0, 00:12:25.263 "data_size": 65536 00:12:25.263 } 00:12:25.263 ] 00:12:25.263 } 00:12:25.263 } 00:12:25.263 }' 00:12:25.263 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:25.263 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:25.263 BaseBdev2 00:12:25.263 BaseBdev3 
00:12:25.263 BaseBdev4' 00:12:25.263 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.522 12:29:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.522 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.522 12:29:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.523 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:25.523 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.523 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.523 [2024-09-30 12:29:37.343955] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.782 
12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.782 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.782 "name": "Existed_Raid", 00:12:25.782 "uuid": "8ea0649a-5814-465f-aea2-506ac896f2b0", 00:12:25.782 "strip_size_kb": 0, 00:12:25.782 "state": "online", 00:12:25.782 "raid_level": "raid1", 00:12:25.782 "superblock": false, 00:12:25.782 "num_base_bdevs": 4, 00:12:25.782 "num_base_bdevs_discovered": 3, 00:12:25.782 "num_base_bdevs_operational": 3, 00:12:25.782 "base_bdevs_list": [ 00:12:25.782 { 00:12:25.782 "name": null, 00:12:25.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.782 "is_configured": false, 00:12:25.782 "data_offset": 0, 00:12:25.782 "data_size": 65536 00:12:25.782 }, 00:12:25.782 { 00:12:25.782 "name": "BaseBdev2", 00:12:25.782 "uuid": "26ea1a7b-8c17-4893-a51d-77b466b0365c", 00:12:25.782 "is_configured": true, 00:12:25.782 "data_offset": 0, 00:12:25.782 "data_size": 65536 00:12:25.782 }, 00:12:25.782 { 00:12:25.783 "name": "BaseBdev3", 00:12:25.783 "uuid": "3fc12274-f02c-4a47-94d9-353326e4f218", 00:12:25.783 "is_configured": true, 00:12:25.783 "data_offset": 0, 
00:12:25.783 "data_size": 65536 00:12:25.783 }, 00:12:25.783 { 00:12:25.783 "name": "BaseBdev4", 00:12:25.783 "uuid": "c12b7972-dc77-42fa-a2ae-4315609ec187", 00:12:25.783 "is_configured": true, 00:12:25.783 "data_offset": 0, 00:12:25.783 "data_size": 65536 00:12:25.783 } 00:12:25.783 ] 00:12:25.783 }' 00:12:25.783 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.783 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.042 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:26.042 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:26.042 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:26.042 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.042 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.042 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.042 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.042 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:26.042 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:26.042 12:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:26.042 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.042 12:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.042 [2024-09-30 12:29:37.905843] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.302 [2024-09-30 12:29:38.054126] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.302 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.561 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.562 [2024-09-30 12:29:38.213844] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:26.562 [2024-09-30 12:29:38.214021] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:26.562 [2024-09-30 12:29:38.314983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.562 [2024-09-30 12:29:38.315125] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.562 [2024-09-30 12:29:38.315171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.562 BaseBdev2 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ 
-z '' ]] 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.562 [ 00:12:26.562 { 00:12:26.562 "name": "BaseBdev2", 00:12:26.562 "aliases": [ 00:12:26.562 "552ebc3b-00a5-4a4e-9a7e-0c3b3c742dd4" 00:12:26.562 ], 00:12:26.562 "product_name": "Malloc disk", 00:12:26.562 "block_size": 512, 00:12:26.562 "num_blocks": 65536, 00:12:26.562 "uuid": "552ebc3b-00a5-4a4e-9a7e-0c3b3c742dd4", 00:12:26.562 "assigned_rate_limits": { 00:12:26.562 "rw_ios_per_sec": 0, 00:12:26.562 "rw_mbytes_per_sec": 0, 00:12:26.562 "r_mbytes_per_sec": 0, 00:12:26.562 "w_mbytes_per_sec": 0 00:12:26.562 }, 00:12:26.562 "claimed": false, 00:12:26.562 "zoned": false, 00:12:26.562 "supported_io_types": { 00:12:26.562 "read": true, 00:12:26.562 "write": true, 00:12:26.562 "unmap": true, 00:12:26.562 "flush": true, 00:12:26.562 "reset": true, 00:12:26.562 "nvme_admin": false, 00:12:26.562 "nvme_io": false, 00:12:26.562 "nvme_io_md": false, 00:12:26.562 "write_zeroes": true, 00:12:26.562 "zcopy": true, 00:12:26.562 "get_zone_info": false, 00:12:26.562 "zone_management": false, 00:12:26.562 "zone_append": false, 00:12:26.562 "compare": false, 00:12:26.562 
"compare_and_write": false, 00:12:26.562 "abort": true, 00:12:26.562 "seek_hole": false, 00:12:26.562 "seek_data": false, 00:12:26.562 "copy": true, 00:12:26.562 "nvme_iov_md": false 00:12:26.562 }, 00:12:26.562 "memory_domains": [ 00:12:26.562 { 00:12:26.562 "dma_device_id": "system", 00:12:26.562 "dma_device_type": 1 00:12:26.562 }, 00:12:26.562 { 00:12:26.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.562 "dma_device_type": 2 00:12:26.562 } 00:12:26.562 ], 00:12:26.562 "driver_specific": {} 00:12:26.562 } 00:12:26.562 ] 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.562 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.823 BaseBdev3 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.823 [ 00:12:26.823 { 00:12:26.823 "name": "BaseBdev3", 00:12:26.823 "aliases": [ 00:12:26.823 "1925cc98-2a0a-4d48-bbd8-89e97b91c5ca" 00:12:26.823 ], 00:12:26.823 "product_name": "Malloc disk", 00:12:26.823 "block_size": 512, 00:12:26.823 "num_blocks": 65536, 00:12:26.823 "uuid": "1925cc98-2a0a-4d48-bbd8-89e97b91c5ca", 00:12:26.823 "assigned_rate_limits": { 00:12:26.823 "rw_ios_per_sec": 0, 00:12:26.823 "rw_mbytes_per_sec": 0, 00:12:26.823 "r_mbytes_per_sec": 0, 00:12:26.823 "w_mbytes_per_sec": 0 00:12:26.823 }, 00:12:26.823 "claimed": false, 00:12:26.823 "zoned": false, 00:12:26.823 "supported_io_types": { 00:12:26.823 "read": true, 00:12:26.823 "write": true, 00:12:26.823 "unmap": true, 00:12:26.823 "flush": true, 00:12:26.823 "reset": true, 00:12:26.823 "nvme_admin": false, 00:12:26.823 "nvme_io": false, 00:12:26.823 "nvme_io_md": false, 00:12:26.823 "write_zeroes": true, 00:12:26.823 "zcopy": true, 00:12:26.823 "get_zone_info": false, 00:12:26.823 "zone_management": false, 00:12:26.823 "zone_append": false, 00:12:26.823 "compare": false, 00:12:26.823 
"compare_and_write": false, 00:12:26.823 "abort": true, 00:12:26.823 "seek_hole": false, 00:12:26.823 "seek_data": false, 00:12:26.823 "copy": true, 00:12:26.823 "nvme_iov_md": false 00:12:26.823 }, 00:12:26.823 "memory_domains": [ 00:12:26.823 { 00:12:26.823 "dma_device_id": "system", 00:12:26.823 "dma_device_type": 1 00:12:26.823 }, 00:12:26.823 { 00:12:26.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.823 "dma_device_type": 2 00:12:26.823 } 00:12:26.823 ], 00:12:26.823 "driver_specific": {} 00:12:26.823 } 00:12:26.823 ] 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.823 BaseBdev4 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.823 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.823 [ 00:12:26.823 { 00:12:26.823 "name": "BaseBdev4", 00:12:26.823 "aliases": [ 00:12:26.823 "97022fbb-618e-446e-92d0-bd843391804a" 00:12:26.823 ], 00:12:26.823 "product_name": "Malloc disk", 00:12:26.823 "block_size": 512, 00:12:26.823 "num_blocks": 65536, 00:12:26.823 "uuid": "97022fbb-618e-446e-92d0-bd843391804a", 00:12:26.823 "assigned_rate_limits": { 00:12:26.823 "rw_ios_per_sec": 0, 00:12:26.823 "rw_mbytes_per_sec": 0, 00:12:26.823 "r_mbytes_per_sec": 0, 00:12:26.823 "w_mbytes_per_sec": 0 00:12:26.823 }, 00:12:26.823 "claimed": false, 00:12:26.823 "zoned": false, 00:12:26.823 "supported_io_types": { 00:12:26.823 "read": true, 00:12:26.823 "write": true, 00:12:26.823 "unmap": true, 00:12:26.823 "flush": true, 00:12:26.823 "reset": true, 00:12:26.823 "nvme_admin": false, 00:12:26.823 "nvme_io": false, 00:12:26.823 "nvme_io_md": false, 00:12:26.823 "write_zeroes": true, 00:12:26.823 "zcopy": true, 00:12:26.823 "get_zone_info": false, 00:12:26.823 "zone_management": false, 00:12:26.823 "zone_append": false, 00:12:26.823 "compare": false, 00:12:26.823 
"compare_and_write": false, 00:12:26.823 "abort": true, 00:12:26.823 "seek_hole": false, 00:12:26.823 "seek_data": false, 00:12:26.823 "copy": true, 00:12:26.823 "nvme_iov_md": false 00:12:26.823 }, 00:12:26.823 "memory_domains": [ 00:12:26.823 { 00:12:26.824 "dma_device_id": "system", 00:12:26.824 "dma_device_type": 1 00:12:26.824 }, 00:12:26.824 { 00:12:26.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.824 "dma_device_type": 2 00:12:26.824 } 00:12:26.824 ], 00:12:26.824 "driver_specific": {} 00:12:26.824 } 00:12:26.824 ] 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.824 [2024-09-30 12:29:38.624219] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:26.824 [2024-09-30 12:29:38.624316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:26.824 [2024-09-30 12:29:38.624359] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:26.824 [2024-09-30 12:29:38.626409] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:26.824 [2024-09-30 12:29:38.626497] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.824 "name": "Existed_Raid", 00:12:26.824 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:26.824 "strip_size_kb": 0, 00:12:26.824 "state": "configuring", 00:12:26.824 "raid_level": "raid1", 00:12:26.824 "superblock": false, 00:12:26.824 "num_base_bdevs": 4, 00:12:26.824 "num_base_bdevs_discovered": 3, 00:12:26.824 "num_base_bdevs_operational": 4, 00:12:26.824 "base_bdevs_list": [ 00:12:26.824 { 00:12:26.824 "name": "BaseBdev1", 00:12:26.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.824 "is_configured": false, 00:12:26.824 "data_offset": 0, 00:12:26.824 "data_size": 0 00:12:26.824 }, 00:12:26.824 { 00:12:26.824 "name": "BaseBdev2", 00:12:26.824 "uuid": "552ebc3b-00a5-4a4e-9a7e-0c3b3c742dd4", 00:12:26.824 "is_configured": true, 00:12:26.824 "data_offset": 0, 00:12:26.824 "data_size": 65536 00:12:26.824 }, 00:12:26.824 { 00:12:26.824 "name": "BaseBdev3", 00:12:26.824 "uuid": "1925cc98-2a0a-4d48-bbd8-89e97b91c5ca", 00:12:26.824 "is_configured": true, 00:12:26.824 "data_offset": 0, 00:12:26.824 "data_size": 65536 00:12:26.824 }, 00:12:26.824 { 00:12:26.824 "name": "BaseBdev4", 00:12:26.824 "uuid": "97022fbb-618e-446e-92d0-bd843391804a", 00:12:26.824 "is_configured": true, 00:12:26.824 "data_offset": 0, 00:12:26.824 "data_size": 65536 00:12:26.824 } 00:12:26.824 ] 00:12:26.824 }' 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.824 12:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.396 [2024-09-30 12:29:39.007584] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.396 "name": "Existed_Raid", 00:12:27.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.396 
"strip_size_kb": 0, 00:12:27.396 "state": "configuring", 00:12:27.396 "raid_level": "raid1", 00:12:27.396 "superblock": false, 00:12:27.396 "num_base_bdevs": 4, 00:12:27.396 "num_base_bdevs_discovered": 2, 00:12:27.396 "num_base_bdevs_operational": 4, 00:12:27.396 "base_bdevs_list": [ 00:12:27.396 { 00:12:27.396 "name": "BaseBdev1", 00:12:27.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.396 "is_configured": false, 00:12:27.396 "data_offset": 0, 00:12:27.396 "data_size": 0 00:12:27.396 }, 00:12:27.396 { 00:12:27.396 "name": null, 00:12:27.396 "uuid": "552ebc3b-00a5-4a4e-9a7e-0c3b3c742dd4", 00:12:27.396 "is_configured": false, 00:12:27.396 "data_offset": 0, 00:12:27.396 "data_size": 65536 00:12:27.396 }, 00:12:27.396 { 00:12:27.396 "name": "BaseBdev3", 00:12:27.396 "uuid": "1925cc98-2a0a-4d48-bbd8-89e97b91c5ca", 00:12:27.396 "is_configured": true, 00:12:27.396 "data_offset": 0, 00:12:27.396 "data_size": 65536 00:12:27.396 }, 00:12:27.396 { 00:12:27.396 "name": "BaseBdev4", 00:12:27.396 "uuid": "97022fbb-618e-446e-92d0-bd843391804a", 00:12:27.396 "is_configured": true, 00:12:27.396 "data_offset": 0, 00:12:27.396 "data_size": 65536 00:12:27.396 } 00:12:27.396 ] 00:12:27.396 }' 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.396 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.660 12:29:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.660 [2024-09-30 12:29:39.488454] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:27.660 BaseBdev1 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.660 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.660 [ 00:12:27.660 { 00:12:27.660 "name": "BaseBdev1", 00:12:27.660 "aliases": [ 00:12:27.660 "6c6f128f-0569-4c2d-82ad-4d5eb0d394a9" 00:12:27.660 ], 00:12:27.660 "product_name": "Malloc disk", 00:12:27.660 "block_size": 512, 00:12:27.660 "num_blocks": 65536, 00:12:27.660 "uuid": "6c6f128f-0569-4c2d-82ad-4d5eb0d394a9", 00:12:27.660 "assigned_rate_limits": { 00:12:27.660 "rw_ios_per_sec": 0, 00:12:27.660 "rw_mbytes_per_sec": 0, 00:12:27.660 "r_mbytes_per_sec": 0, 00:12:27.660 "w_mbytes_per_sec": 0 00:12:27.660 }, 00:12:27.660 "claimed": true, 00:12:27.660 "claim_type": "exclusive_write", 00:12:27.660 "zoned": false, 00:12:27.660 "supported_io_types": { 00:12:27.660 "read": true, 00:12:27.660 "write": true, 00:12:27.660 "unmap": true, 00:12:27.660 "flush": true, 00:12:27.660 "reset": true, 00:12:27.660 "nvme_admin": false, 00:12:27.660 "nvme_io": false, 00:12:27.660 "nvme_io_md": false, 00:12:27.660 "write_zeroes": true, 00:12:27.660 "zcopy": true, 00:12:27.660 "get_zone_info": false, 00:12:27.660 "zone_management": false, 00:12:27.660 "zone_append": false, 00:12:27.660 "compare": false, 00:12:27.660 "compare_and_write": false, 00:12:27.660 "abort": true, 00:12:27.660 "seek_hole": false, 00:12:27.660 "seek_data": false, 00:12:27.660 "copy": true, 00:12:27.661 "nvme_iov_md": false 00:12:27.661 }, 00:12:27.661 "memory_domains": [ 00:12:27.661 { 00:12:27.661 "dma_device_id": "system", 00:12:27.661 "dma_device_type": 1 00:12:27.661 }, 00:12:27.661 { 00:12:27.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.661 "dma_device_type": 2 00:12:27.661 } 00:12:27.661 ], 00:12:27.661 "driver_specific": {} 00:12:27.661 } 00:12:27.661 ] 00:12:27.661 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.661 12:29:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:12:27.661 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:27.661 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.661 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.661 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.661 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.661 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.661 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.661 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.661 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.661 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.661 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.661 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.661 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.661 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.661 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.920 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.920 "name": "Existed_Raid", 00:12:27.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.920 
"strip_size_kb": 0, 00:12:27.920 "state": "configuring", 00:12:27.920 "raid_level": "raid1", 00:12:27.920 "superblock": false, 00:12:27.920 "num_base_bdevs": 4, 00:12:27.920 "num_base_bdevs_discovered": 3, 00:12:27.920 "num_base_bdevs_operational": 4, 00:12:27.920 "base_bdevs_list": [ 00:12:27.920 { 00:12:27.920 "name": "BaseBdev1", 00:12:27.920 "uuid": "6c6f128f-0569-4c2d-82ad-4d5eb0d394a9", 00:12:27.920 "is_configured": true, 00:12:27.920 "data_offset": 0, 00:12:27.920 "data_size": 65536 00:12:27.920 }, 00:12:27.920 { 00:12:27.920 "name": null, 00:12:27.920 "uuid": "552ebc3b-00a5-4a4e-9a7e-0c3b3c742dd4", 00:12:27.920 "is_configured": false, 00:12:27.920 "data_offset": 0, 00:12:27.920 "data_size": 65536 00:12:27.920 }, 00:12:27.920 { 00:12:27.920 "name": "BaseBdev3", 00:12:27.920 "uuid": "1925cc98-2a0a-4d48-bbd8-89e97b91c5ca", 00:12:27.920 "is_configured": true, 00:12:27.920 "data_offset": 0, 00:12:27.920 "data_size": 65536 00:12:27.920 }, 00:12:27.920 { 00:12:27.920 "name": "BaseBdev4", 00:12:27.920 "uuid": "97022fbb-618e-446e-92d0-bd843391804a", 00:12:27.920 "is_configured": true, 00:12:27.920 "data_offset": 0, 00:12:27.920 "data_size": 65536 00:12:27.920 } 00:12:27.920 ] 00:12:27.920 }' 00:12:27.920 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.920 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.181 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.181 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:28.181 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.181 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.181 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.181 
12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:28.181 12:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:28.181 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.181 12:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.181 [2024-09-30 12:29:40.003650] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:28.181 12:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.181 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:28.181 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.181 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.181 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.181 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.181 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.181 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.181 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.181 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.181 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.181 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.181 12:29:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.181 12:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.181 12:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.181 12:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.181 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.181 "name": "Existed_Raid", 00:12:28.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.181 "strip_size_kb": 0, 00:12:28.181 "state": "configuring", 00:12:28.181 "raid_level": "raid1", 00:12:28.181 "superblock": false, 00:12:28.181 "num_base_bdevs": 4, 00:12:28.181 "num_base_bdevs_discovered": 2, 00:12:28.181 "num_base_bdevs_operational": 4, 00:12:28.181 "base_bdevs_list": [ 00:12:28.181 { 00:12:28.181 "name": "BaseBdev1", 00:12:28.181 "uuid": "6c6f128f-0569-4c2d-82ad-4d5eb0d394a9", 00:12:28.181 "is_configured": true, 00:12:28.181 "data_offset": 0, 00:12:28.181 "data_size": 65536 00:12:28.181 }, 00:12:28.181 { 00:12:28.181 "name": null, 00:12:28.181 "uuid": "552ebc3b-00a5-4a4e-9a7e-0c3b3c742dd4", 00:12:28.181 "is_configured": false, 00:12:28.181 "data_offset": 0, 00:12:28.181 "data_size": 65536 00:12:28.181 }, 00:12:28.181 { 00:12:28.181 "name": null, 00:12:28.181 "uuid": "1925cc98-2a0a-4d48-bbd8-89e97b91c5ca", 00:12:28.181 "is_configured": false, 00:12:28.181 "data_offset": 0, 00:12:28.181 "data_size": 65536 00:12:28.181 }, 00:12:28.181 { 00:12:28.181 "name": "BaseBdev4", 00:12:28.181 "uuid": "97022fbb-618e-446e-92d0-bd843391804a", 00:12:28.181 "is_configured": true, 00:12:28.181 "data_offset": 0, 00:12:28.181 "data_size": 65536 00:12:28.181 } 00:12:28.181 ] 00:12:28.181 }' 00:12:28.181 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.181 12:29:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:28.751 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.752 [2024-09-30 12:29:40.454919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.752 "name": "Existed_Raid", 00:12:28.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.752 "strip_size_kb": 0, 00:12:28.752 "state": "configuring", 00:12:28.752 "raid_level": "raid1", 00:12:28.752 "superblock": false, 00:12:28.752 "num_base_bdevs": 4, 00:12:28.752 "num_base_bdevs_discovered": 3, 00:12:28.752 "num_base_bdevs_operational": 4, 00:12:28.752 "base_bdevs_list": [ 00:12:28.752 { 00:12:28.752 "name": "BaseBdev1", 00:12:28.752 "uuid": "6c6f128f-0569-4c2d-82ad-4d5eb0d394a9", 00:12:28.752 "is_configured": true, 00:12:28.752 "data_offset": 0, 00:12:28.752 "data_size": 65536 00:12:28.752 }, 00:12:28.752 { 00:12:28.752 "name": null, 00:12:28.752 "uuid": "552ebc3b-00a5-4a4e-9a7e-0c3b3c742dd4", 00:12:28.752 "is_configured": false, 00:12:28.752 "data_offset": 0, 00:12:28.752 "data_size": 65536 00:12:28.752 }, 00:12:28.752 { 
00:12:28.752 "name": "BaseBdev3", 00:12:28.752 "uuid": "1925cc98-2a0a-4d48-bbd8-89e97b91c5ca", 00:12:28.752 "is_configured": true, 00:12:28.752 "data_offset": 0, 00:12:28.752 "data_size": 65536 00:12:28.752 }, 00:12:28.752 { 00:12:28.752 "name": "BaseBdev4", 00:12:28.752 "uuid": "97022fbb-618e-446e-92d0-bd843391804a", 00:12:28.752 "is_configured": true, 00:12:28.752 "data_offset": 0, 00:12:28.752 "data_size": 65536 00:12:28.752 } 00:12:28.752 ] 00:12:28.752 }' 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.752 12:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.012 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.012 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:29.012 12:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.012 12:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.012 12:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.012 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:29.012 12:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:29.012 12:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.012 12:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.012 [2024-09-30 12:29:40.898155] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:29.272 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.272 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:29.272 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.272 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.272 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.272 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.272 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.272 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.272 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.272 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.272 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.272 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.272 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.272 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.272 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.272 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.272 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.272 "name": "Existed_Raid", 00:12:29.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.272 "strip_size_kb": 0, 00:12:29.272 "state": "configuring", 00:12:29.272 "raid_level": "raid1", 00:12:29.272 "superblock": false, 00:12:29.272 
"num_base_bdevs": 4, 00:12:29.272 "num_base_bdevs_discovered": 2, 00:12:29.272 "num_base_bdevs_operational": 4, 00:12:29.272 "base_bdevs_list": [ 00:12:29.272 { 00:12:29.272 "name": null, 00:12:29.272 "uuid": "6c6f128f-0569-4c2d-82ad-4d5eb0d394a9", 00:12:29.272 "is_configured": false, 00:12:29.272 "data_offset": 0, 00:12:29.272 "data_size": 65536 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "name": null, 00:12:29.272 "uuid": "552ebc3b-00a5-4a4e-9a7e-0c3b3c742dd4", 00:12:29.272 "is_configured": false, 00:12:29.272 "data_offset": 0, 00:12:29.272 "data_size": 65536 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "name": "BaseBdev3", 00:12:29.272 "uuid": "1925cc98-2a0a-4d48-bbd8-89e97b91c5ca", 00:12:29.272 "is_configured": true, 00:12:29.272 "data_offset": 0, 00:12:29.272 "data_size": 65536 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "name": "BaseBdev4", 00:12:29.272 "uuid": "97022fbb-618e-446e-92d0-bd843391804a", 00:12:29.272 "is_configured": true, 00:12:29.272 "data_offset": 0, 00:12:29.272 "data_size": 65536 00:12:29.272 } 00:12:29.272 ] 00:12:29.272 }' 00:12:29.272 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.272 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.532 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.532 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:29.532 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.532 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.532 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:29.793 12:29:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.793 [2024-09-30 12:29:41.443496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.793 12:29:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.793 "name": "Existed_Raid", 00:12:29.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.793 "strip_size_kb": 0, 00:12:29.793 "state": "configuring", 00:12:29.793 "raid_level": "raid1", 00:12:29.793 "superblock": false, 00:12:29.793 "num_base_bdevs": 4, 00:12:29.793 "num_base_bdevs_discovered": 3, 00:12:29.793 "num_base_bdevs_operational": 4, 00:12:29.793 "base_bdevs_list": [ 00:12:29.793 { 00:12:29.793 "name": null, 00:12:29.793 "uuid": "6c6f128f-0569-4c2d-82ad-4d5eb0d394a9", 00:12:29.793 "is_configured": false, 00:12:29.793 "data_offset": 0, 00:12:29.793 "data_size": 65536 00:12:29.793 }, 00:12:29.793 { 00:12:29.793 "name": "BaseBdev2", 00:12:29.793 "uuid": "552ebc3b-00a5-4a4e-9a7e-0c3b3c742dd4", 00:12:29.793 "is_configured": true, 00:12:29.793 "data_offset": 0, 00:12:29.793 "data_size": 65536 00:12:29.793 }, 00:12:29.793 { 00:12:29.793 "name": "BaseBdev3", 00:12:29.793 "uuid": "1925cc98-2a0a-4d48-bbd8-89e97b91c5ca", 00:12:29.793 "is_configured": true, 00:12:29.793 "data_offset": 0, 00:12:29.793 "data_size": 65536 00:12:29.793 }, 00:12:29.793 { 00:12:29.793 "name": "BaseBdev4", 00:12:29.793 "uuid": "97022fbb-618e-446e-92d0-bd843391804a", 00:12:29.793 "is_configured": true, 00:12:29.793 "data_offset": 0, 00:12:29.793 "data_size": 65536 00:12:29.793 } 00:12:29.793 ] 00:12:29.793 }' 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.793 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.053 12:29:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:30.053 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.053 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.053 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.053 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.053 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:30.053 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.053 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:30.053 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.053 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.053 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.314 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6c6f128f-0569-4c2d-82ad-4d5eb0d394a9 00:12:30.314 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.314 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.314 [2024-09-30 12:29:41.992532] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:30.314 [2024-09-30 12:29:41.992636] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:30.314 [2024-09-30 12:29:41.992665] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:30.314 
[2024-09-30 12:29:41.993001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:30.314 [2024-09-30 12:29:41.993213] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:30.314 [2024-09-30 12:29:41.993256] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:30.314 [2024-09-30 12:29:41.993557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.314 NewBaseBdev 00:12:30.314 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.314 12:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:30.314 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:30.314 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:30.314 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:30.314 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:30.314 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:30.314 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:30.314 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.314 12:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.315 [ 00:12:30.315 { 00:12:30.315 "name": "NewBaseBdev", 00:12:30.315 "aliases": [ 00:12:30.315 "6c6f128f-0569-4c2d-82ad-4d5eb0d394a9" 00:12:30.315 ], 00:12:30.315 "product_name": "Malloc disk", 00:12:30.315 "block_size": 512, 00:12:30.315 "num_blocks": 65536, 00:12:30.315 "uuid": "6c6f128f-0569-4c2d-82ad-4d5eb0d394a9", 00:12:30.315 "assigned_rate_limits": { 00:12:30.315 "rw_ios_per_sec": 0, 00:12:30.315 "rw_mbytes_per_sec": 0, 00:12:30.315 "r_mbytes_per_sec": 0, 00:12:30.315 "w_mbytes_per_sec": 0 00:12:30.315 }, 00:12:30.315 "claimed": true, 00:12:30.315 "claim_type": "exclusive_write", 00:12:30.315 "zoned": false, 00:12:30.315 "supported_io_types": { 00:12:30.315 "read": true, 00:12:30.315 "write": true, 00:12:30.315 "unmap": true, 00:12:30.315 "flush": true, 00:12:30.315 "reset": true, 00:12:30.315 "nvme_admin": false, 00:12:30.315 "nvme_io": false, 00:12:30.315 "nvme_io_md": false, 00:12:30.315 "write_zeroes": true, 00:12:30.315 "zcopy": true, 00:12:30.315 "get_zone_info": false, 00:12:30.315 "zone_management": false, 00:12:30.315 "zone_append": false, 00:12:30.315 "compare": false, 00:12:30.315 "compare_and_write": false, 00:12:30.315 "abort": true, 00:12:30.315 "seek_hole": false, 00:12:30.315 "seek_data": false, 00:12:30.315 "copy": true, 00:12:30.315 "nvme_iov_md": false 00:12:30.315 }, 00:12:30.315 "memory_domains": [ 00:12:30.315 { 00:12:30.315 "dma_device_id": "system", 00:12:30.315 "dma_device_type": 1 00:12:30.315 }, 00:12:30.315 { 00:12:30.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.315 "dma_device_type": 2 00:12:30.315 } 00:12:30.315 ], 00:12:30.315 "driver_specific": {} 00:12:30.315 } 00:12:30.315 ] 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.315 "name": "Existed_Raid", 00:12:30.315 "uuid": "03a07856-1b9a-42ff-bc2a-58d6d6ec0b5c", 00:12:30.315 "strip_size_kb": 0, 00:12:30.315 "state": "online", 00:12:30.315 
"raid_level": "raid1", 00:12:30.315 "superblock": false, 00:12:30.315 "num_base_bdevs": 4, 00:12:30.315 "num_base_bdevs_discovered": 4, 00:12:30.315 "num_base_bdevs_operational": 4, 00:12:30.315 "base_bdevs_list": [ 00:12:30.315 { 00:12:30.315 "name": "NewBaseBdev", 00:12:30.315 "uuid": "6c6f128f-0569-4c2d-82ad-4d5eb0d394a9", 00:12:30.315 "is_configured": true, 00:12:30.315 "data_offset": 0, 00:12:30.315 "data_size": 65536 00:12:30.315 }, 00:12:30.315 { 00:12:30.315 "name": "BaseBdev2", 00:12:30.315 "uuid": "552ebc3b-00a5-4a4e-9a7e-0c3b3c742dd4", 00:12:30.315 "is_configured": true, 00:12:30.315 "data_offset": 0, 00:12:30.315 "data_size": 65536 00:12:30.315 }, 00:12:30.315 { 00:12:30.315 "name": "BaseBdev3", 00:12:30.315 "uuid": "1925cc98-2a0a-4d48-bbd8-89e97b91c5ca", 00:12:30.315 "is_configured": true, 00:12:30.315 "data_offset": 0, 00:12:30.315 "data_size": 65536 00:12:30.315 }, 00:12:30.315 { 00:12:30.315 "name": "BaseBdev4", 00:12:30.315 "uuid": "97022fbb-618e-446e-92d0-bd843391804a", 00:12:30.315 "is_configured": true, 00:12:30.315 "data_offset": 0, 00:12:30.315 "data_size": 65536 00:12:30.315 } 00:12:30.315 ] 00:12:30.315 }' 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.315 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.575 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:30.575 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:30.575 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:30.575 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:30.575 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:30.575 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:30.575 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:30.575 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.575 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.836 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:30.836 [2024-09-30 12:29:42.476147] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.836 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.836 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:30.836 "name": "Existed_Raid", 00:12:30.836 "aliases": [ 00:12:30.836 "03a07856-1b9a-42ff-bc2a-58d6d6ec0b5c" 00:12:30.836 ], 00:12:30.836 "product_name": "Raid Volume", 00:12:30.836 "block_size": 512, 00:12:30.836 "num_blocks": 65536, 00:12:30.836 "uuid": "03a07856-1b9a-42ff-bc2a-58d6d6ec0b5c", 00:12:30.836 "assigned_rate_limits": { 00:12:30.836 "rw_ios_per_sec": 0, 00:12:30.836 "rw_mbytes_per_sec": 0, 00:12:30.836 "r_mbytes_per_sec": 0, 00:12:30.836 "w_mbytes_per_sec": 0 00:12:30.836 }, 00:12:30.836 "claimed": false, 00:12:30.836 "zoned": false, 00:12:30.836 "supported_io_types": { 00:12:30.836 "read": true, 00:12:30.836 "write": true, 00:12:30.836 "unmap": false, 00:12:30.836 "flush": false, 00:12:30.836 "reset": true, 00:12:30.836 "nvme_admin": false, 00:12:30.836 "nvme_io": false, 00:12:30.836 "nvme_io_md": false, 00:12:30.837 "write_zeroes": true, 00:12:30.837 "zcopy": false, 00:12:30.837 "get_zone_info": false, 00:12:30.837 "zone_management": false, 00:12:30.837 "zone_append": false, 00:12:30.837 "compare": false, 00:12:30.837 "compare_and_write": false, 00:12:30.837 "abort": false, 00:12:30.837 "seek_hole": false, 00:12:30.837 "seek_data": false, 00:12:30.837 
"copy": false, 00:12:30.837 "nvme_iov_md": false 00:12:30.837 }, 00:12:30.837 "memory_domains": [ 00:12:30.837 { 00:12:30.837 "dma_device_id": "system", 00:12:30.837 "dma_device_type": 1 00:12:30.837 }, 00:12:30.837 { 00:12:30.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.837 "dma_device_type": 2 00:12:30.837 }, 00:12:30.837 { 00:12:30.837 "dma_device_id": "system", 00:12:30.837 "dma_device_type": 1 00:12:30.837 }, 00:12:30.837 { 00:12:30.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.837 "dma_device_type": 2 00:12:30.837 }, 00:12:30.837 { 00:12:30.837 "dma_device_id": "system", 00:12:30.837 "dma_device_type": 1 00:12:30.837 }, 00:12:30.837 { 00:12:30.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.837 "dma_device_type": 2 00:12:30.837 }, 00:12:30.837 { 00:12:30.837 "dma_device_id": "system", 00:12:30.837 "dma_device_type": 1 00:12:30.837 }, 00:12:30.837 { 00:12:30.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.837 "dma_device_type": 2 00:12:30.837 } 00:12:30.837 ], 00:12:30.837 "driver_specific": { 00:12:30.837 "raid": { 00:12:30.837 "uuid": "03a07856-1b9a-42ff-bc2a-58d6d6ec0b5c", 00:12:30.837 "strip_size_kb": 0, 00:12:30.837 "state": "online", 00:12:30.837 "raid_level": "raid1", 00:12:30.837 "superblock": false, 00:12:30.837 "num_base_bdevs": 4, 00:12:30.837 "num_base_bdevs_discovered": 4, 00:12:30.837 "num_base_bdevs_operational": 4, 00:12:30.837 "base_bdevs_list": [ 00:12:30.837 { 00:12:30.837 "name": "NewBaseBdev", 00:12:30.837 "uuid": "6c6f128f-0569-4c2d-82ad-4d5eb0d394a9", 00:12:30.837 "is_configured": true, 00:12:30.837 "data_offset": 0, 00:12:30.837 "data_size": 65536 00:12:30.837 }, 00:12:30.837 { 00:12:30.837 "name": "BaseBdev2", 00:12:30.837 "uuid": "552ebc3b-00a5-4a4e-9a7e-0c3b3c742dd4", 00:12:30.837 "is_configured": true, 00:12:30.837 "data_offset": 0, 00:12:30.837 "data_size": 65536 00:12:30.837 }, 00:12:30.837 { 00:12:30.837 "name": "BaseBdev3", 00:12:30.837 "uuid": "1925cc98-2a0a-4d48-bbd8-89e97b91c5ca", 00:12:30.837 
"is_configured": true, 00:12:30.837 "data_offset": 0, 00:12:30.837 "data_size": 65536 00:12:30.837 }, 00:12:30.837 { 00:12:30.837 "name": "BaseBdev4", 00:12:30.837 "uuid": "97022fbb-618e-446e-92d0-bd843391804a", 00:12:30.837 "is_configured": true, 00:12:30.837 "data_offset": 0, 00:12:30.837 "data_size": 65536 00:12:30.837 } 00:12:30.837 ] 00:12:30.837 } 00:12:30.837 } 00:12:30.837 }' 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:30.837 BaseBdev2 00:12:30.837 BaseBdev3 00:12:30.837 BaseBdev4' 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.837 12:29:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.837 12:29:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.837 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.098 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.098 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.098 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.098 12:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:31.098 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.098 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.098 [2024-09-30 12:29:42.779315] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:31.098 [2024-09-30 12:29:42.779418] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:31.098 [2024-09-30 12:29:42.779536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.098 [2024-09-30 12:29:42.779882] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:31.098 [2024-09-30 12:29:42.779942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:31.098 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.098 12:29:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73058 00:12:31.098 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73058 ']' 00:12:31.098 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73058 00:12:31.098 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:31.098 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:31.098 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73058 00:12:31.099 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:31.099 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:31.099 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73058' 00:12:31.099 killing process with pid 73058 00:12:31.099 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73058 00:12:31.099 [2024-09-30 12:29:42.827251] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:31.099 12:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73058 00:12:31.358 [2024-09-30 12:29:43.249118] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.782 12:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:32.782 00:12:32.782 real 0m11.505s 00:12:32.782 user 0m17.850s 00:12:32.782 sys 0m2.094s 00:12:32.782 12:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:32.782 12:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.782 ************************************ 00:12:32.782 END TEST raid_state_function_test 00:12:32.782 ************************************ 
00:12:32.782 12:29:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:32.782 12:29:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:32.782 12:29:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:32.782 12:29:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.782 ************************************ 00:12:32.782 START TEST raid_state_function_test_sb 00:12:32.782 ************************************ 00:12:32.782 12:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:33.043 
12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73729 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73729' 00:12:33.043 Process raid pid: 73729 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73729 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73729 ']' 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:33.043 12:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.043 [2024-09-30 12:29:44.776283] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:33.043 [2024-09-30 12:29:44.776483] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.302 [2024-09-30 12:29:44.941109] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.302 [2024-09-30 12:29:45.188434] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.561 [2024-09-30 12:29:45.422315] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.561 [2024-09-30 12:29:45.422418] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.820 12:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:33.820 12:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:33.820 12:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:33.820 12:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.820 12:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.820 [2024-09-30 12:29:45.596825] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:33.820 [2024-09-30 12:29:45.596879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:33.820 [2024-09-30 12:29:45.596890] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:33.820 [2024-09-30 12:29:45.596900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:33.820 [2024-09-30 12:29:45.596906] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:33.820 [2024-09-30 12:29:45.596917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:33.820 [2024-09-30 12:29:45.596923] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:33.820 [2024-09-30 12:29:45.596932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:33.821 12:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.821 12:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:33.821 12:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.821 12:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.821 12:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.821 12:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.821 12:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.821 12:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.821 12:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.821 12:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.821 12:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.821 12:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.821 12:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.821 12:29:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.821 12:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.821 12:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.821 12:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.821 "name": "Existed_Raid", 00:12:33.821 "uuid": "3c4a77ea-ab81-4e6d-ada2-2921d4dfc34a", 00:12:33.821 "strip_size_kb": 0, 00:12:33.821 "state": "configuring", 00:12:33.821 "raid_level": "raid1", 00:12:33.821 "superblock": true, 00:12:33.821 "num_base_bdevs": 4, 00:12:33.821 "num_base_bdevs_discovered": 0, 00:12:33.821 "num_base_bdevs_operational": 4, 00:12:33.821 "base_bdevs_list": [ 00:12:33.821 { 00:12:33.821 "name": "BaseBdev1", 00:12:33.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.821 "is_configured": false, 00:12:33.821 "data_offset": 0, 00:12:33.821 "data_size": 0 00:12:33.821 }, 00:12:33.821 { 00:12:33.821 "name": "BaseBdev2", 00:12:33.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.821 "is_configured": false, 00:12:33.821 "data_offset": 0, 00:12:33.821 "data_size": 0 00:12:33.821 }, 00:12:33.821 { 00:12:33.821 "name": "BaseBdev3", 00:12:33.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.821 "is_configured": false, 00:12:33.821 "data_offset": 0, 00:12:33.821 "data_size": 0 00:12:33.821 }, 00:12:33.821 { 00:12:33.821 "name": "BaseBdev4", 00:12:33.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.821 "is_configured": false, 00:12:33.821 "data_offset": 0, 00:12:33.821 "data_size": 0 00:12:33.821 } 00:12:33.821 ] 00:12:33.821 }' 00:12:33.821 12:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.821 12:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.388 12:29:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:34.388 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.388 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.388 [2024-09-30 12:29:46.039930] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:34.388 [2024-09-30 12:29:46.040018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:34.388 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.388 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:34.388 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.388 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.388 [2024-09-30 12:29:46.051945] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:34.388 [2024-09-30 12:29:46.052021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:34.388 [2024-09-30 12:29:46.052050] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:34.388 [2024-09-30 12:29:46.052073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:34.388 [2024-09-30 12:29:46.052091] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:34.388 [2024-09-30 12:29:46.052112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:34.388 [2024-09-30 12:29:46.052129] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:34.388 [2024-09-30 12:29:46.052149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:34.388 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.388 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:34.388 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.388 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.388 [2024-09-30 12:29:46.123460] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:34.388 BaseBdev1 00:12:34.388 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.388 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.389 [ 00:12:34.389 { 00:12:34.389 "name": "BaseBdev1", 00:12:34.389 "aliases": [ 00:12:34.389 "008cce5c-98a5-487b-87d1-89fb527c2bf6" 00:12:34.389 ], 00:12:34.389 "product_name": "Malloc disk", 00:12:34.389 "block_size": 512, 00:12:34.389 "num_blocks": 65536, 00:12:34.389 "uuid": "008cce5c-98a5-487b-87d1-89fb527c2bf6", 00:12:34.389 "assigned_rate_limits": { 00:12:34.389 "rw_ios_per_sec": 0, 00:12:34.389 "rw_mbytes_per_sec": 0, 00:12:34.389 "r_mbytes_per_sec": 0, 00:12:34.389 "w_mbytes_per_sec": 0 00:12:34.389 }, 00:12:34.389 "claimed": true, 00:12:34.389 "claim_type": "exclusive_write", 00:12:34.389 "zoned": false, 00:12:34.389 "supported_io_types": { 00:12:34.389 "read": true, 00:12:34.389 "write": true, 00:12:34.389 "unmap": true, 00:12:34.389 "flush": true, 00:12:34.389 "reset": true, 00:12:34.389 "nvme_admin": false, 00:12:34.389 "nvme_io": false, 00:12:34.389 "nvme_io_md": false, 00:12:34.389 "write_zeroes": true, 00:12:34.389 "zcopy": true, 00:12:34.389 "get_zone_info": false, 00:12:34.389 "zone_management": false, 00:12:34.389 "zone_append": false, 00:12:34.389 "compare": false, 00:12:34.389 "compare_and_write": false, 00:12:34.389 "abort": true, 00:12:34.389 "seek_hole": false, 00:12:34.389 "seek_data": false, 00:12:34.389 "copy": true, 00:12:34.389 "nvme_iov_md": false 00:12:34.389 }, 00:12:34.389 "memory_domains": [ 00:12:34.389 { 00:12:34.389 "dma_device_id": "system", 00:12:34.389 "dma_device_type": 1 00:12:34.389 }, 00:12:34.389 { 00:12:34.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.389 "dma_device_type": 2 00:12:34.389 } 00:12:34.389 ], 00:12:34.389 "driver_specific": {} 
00:12:34.389 } 00:12:34.389 ] 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.389 "name": "Existed_Raid", 00:12:34.389 "uuid": "31cc0028-b22f-4f21-9fee-33ac1e7d6352", 00:12:34.389 "strip_size_kb": 0, 00:12:34.389 "state": "configuring", 00:12:34.389 "raid_level": "raid1", 00:12:34.389 "superblock": true, 00:12:34.389 "num_base_bdevs": 4, 00:12:34.389 "num_base_bdevs_discovered": 1, 00:12:34.389 "num_base_bdevs_operational": 4, 00:12:34.389 "base_bdevs_list": [ 00:12:34.389 { 00:12:34.389 "name": "BaseBdev1", 00:12:34.389 "uuid": "008cce5c-98a5-487b-87d1-89fb527c2bf6", 00:12:34.389 "is_configured": true, 00:12:34.389 "data_offset": 2048, 00:12:34.389 "data_size": 63488 00:12:34.389 }, 00:12:34.389 { 00:12:34.389 "name": "BaseBdev2", 00:12:34.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.389 "is_configured": false, 00:12:34.389 "data_offset": 0, 00:12:34.389 "data_size": 0 00:12:34.389 }, 00:12:34.389 { 00:12:34.389 "name": "BaseBdev3", 00:12:34.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.389 "is_configured": false, 00:12:34.389 "data_offset": 0, 00:12:34.389 "data_size": 0 00:12:34.389 }, 00:12:34.389 { 00:12:34.389 "name": "BaseBdev4", 00:12:34.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.389 "is_configured": false, 00:12:34.389 "data_offset": 0, 00:12:34.389 "data_size": 0 00:12:34.389 } 00:12:34.389 ] 00:12:34.389 }' 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.389 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.958 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:34.958 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.958 12:29:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:34.958 [2024-09-30 12:29:46.634622] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:34.958 [2024-09-30 12:29:46.634669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:34.958 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.958 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:34.958 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.958 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.958 [2024-09-30 12:29:46.646662] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:34.958 [2024-09-30 12:29:46.648807] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:34.958 [2024-09-30 12:29:46.648884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:34.958 [2024-09-30 12:29:46.648916] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:34.958 [2024-09-30 12:29:46.648956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:34.958 [2024-09-30 12:29:46.648984] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:34.958 [2024-09-30 12:29:46.649007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:34.959 12:29:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.959 "name": 
"Existed_Raid", 00:12:34.959 "uuid": "4a4649d9-d116-41df-a962-a85490df3abb", 00:12:34.959 "strip_size_kb": 0, 00:12:34.959 "state": "configuring", 00:12:34.959 "raid_level": "raid1", 00:12:34.959 "superblock": true, 00:12:34.959 "num_base_bdevs": 4, 00:12:34.959 "num_base_bdevs_discovered": 1, 00:12:34.959 "num_base_bdevs_operational": 4, 00:12:34.959 "base_bdevs_list": [ 00:12:34.959 { 00:12:34.959 "name": "BaseBdev1", 00:12:34.959 "uuid": "008cce5c-98a5-487b-87d1-89fb527c2bf6", 00:12:34.959 "is_configured": true, 00:12:34.959 "data_offset": 2048, 00:12:34.959 "data_size": 63488 00:12:34.959 }, 00:12:34.959 { 00:12:34.959 "name": "BaseBdev2", 00:12:34.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.959 "is_configured": false, 00:12:34.959 "data_offset": 0, 00:12:34.959 "data_size": 0 00:12:34.959 }, 00:12:34.959 { 00:12:34.959 "name": "BaseBdev3", 00:12:34.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.959 "is_configured": false, 00:12:34.959 "data_offset": 0, 00:12:34.959 "data_size": 0 00:12:34.959 }, 00:12:34.959 { 00:12:34.959 "name": "BaseBdev4", 00:12:34.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.959 "is_configured": false, 00:12:34.959 "data_offset": 0, 00:12:34.959 "data_size": 0 00:12:34.959 } 00:12:34.959 ] 00:12:34.959 }' 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.959 12:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.218 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:35.218 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.218 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.477 [2024-09-30 12:29:47.125931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:35.477 
BaseBdev2 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.477 [ 00:12:35.477 { 00:12:35.477 "name": "BaseBdev2", 00:12:35.477 "aliases": [ 00:12:35.477 "ddcc3c8a-cfcd-478b-9e25-0c8732e6a823" 00:12:35.477 ], 00:12:35.477 "product_name": "Malloc disk", 00:12:35.477 "block_size": 512, 00:12:35.477 "num_blocks": 65536, 00:12:35.477 "uuid": "ddcc3c8a-cfcd-478b-9e25-0c8732e6a823", 00:12:35.477 "assigned_rate_limits": { 
00:12:35.477 "rw_ios_per_sec": 0, 00:12:35.477 "rw_mbytes_per_sec": 0, 00:12:35.477 "r_mbytes_per_sec": 0, 00:12:35.477 "w_mbytes_per_sec": 0 00:12:35.477 }, 00:12:35.477 "claimed": true, 00:12:35.477 "claim_type": "exclusive_write", 00:12:35.477 "zoned": false, 00:12:35.477 "supported_io_types": { 00:12:35.477 "read": true, 00:12:35.477 "write": true, 00:12:35.477 "unmap": true, 00:12:35.477 "flush": true, 00:12:35.477 "reset": true, 00:12:35.477 "nvme_admin": false, 00:12:35.477 "nvme_io": false, 00:12:35.477 "nvme_io_md": false, 00:12:35.477 "write_zeroes": true, 00:12:35.477 "zcopy": true, 00:12:35.477 "get_zone_info": false, 00:12:35.477 "zone_management": false, 00:12:35.477 "zone_append": false, 00:12:35.477 "compare": false, 00:12:35.477 "compare_and_write": false, 00:12:35.477 "abort": true, 00:12:35.477 "seek_hole": false, 00:12:35.477 "seek_data": false, 00:12:35.477 "copy": true, 00:12:35.477 "nvme_iov_md": false 00:12:35.477 }, 00:12:35.477 "memory_domains": [ 00:12:35.477 { 00:12:35.477 "dma_device_id": "system", 00:12:35.477 "dma_device_type": 1 00:12:35.477 }, 00:12:35.477 { 00:12:35.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.477 "dma_device_type": 2 00:12:35.477 } 00:12:35.477 ], 00:12:35.477 "driver_specific": {} 00:12:35.477 } 00:12:35.477 ] 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.477 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.477 "name": "Existed_Raid", 00:12:35.477 "uuid": "4a4649d9-d116-41df-a962-a85490df3abb", 00:12:35.477 "strip_size_kb": 0, 00:12:35.477 "state": "configuring", 00:12:35.477 "raid_level": "raid1", 00:12:35.477 "superblock": true, 00:12:35.477 "num_base_bdevs": 4, 00:12:35.477 "num_base_bdevs_discovered": 2, 00:12:35.477 "num_base_bdevs_operational": 4, 00:12:35.477 
"base_bdevs_list": [ 00:12:35.477 { 00:12:35.477 "name": "BaseBdev1", 00:12:35.477 "uuid": "008cce5c-98a5-487b-87d1-89fb527c2bf6", 00:12:35.477 "is_configured": true, 00:12:35.477 "data_offset": 2048, 00:12:35.477 "data_size": 63488 00:12:35.477 }, 00:12:35.477 { 00:12:35.477 "name": "BaseBdev2", 00:12:35.477 "uuid": "ddcc3c8a-cfcd-478b-9e25-0c8732e6a823", 00:12:35.477 "is_configured": true, 00:12:35.477 "data_offset": 2048, 00:12:35.477 "data_size": 63488 00:12:35.477 }, 00:12:35.477 { 00:12:35.477 "name": "BaseBdev3", 00:12:35.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.477 "is_configured": false, 00:12:35.477 "data_offset": 0, 00:12:35.477 "data_size": 0 00:12:35.477 }, 00:12:35.477 { 00:12:35.477 "name": "BaseBdev4", 00:12:35.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.477 "is_configured": false, 00:12:35.477 "data_offset": 0, 00:12:35.478 "data_size": 0 00:12:35.478 } 00:12:35.478 ] 00:12:35.478 }' 00:12:35.478 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.478 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.045 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:36.045 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.045 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.045 [2024-09-30 12:29:47.676109] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:36.045 BaseBdev3 00:12:36.045 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.045 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:36.045 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:12:36.045 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:36.045 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:36.045 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:36.045 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:36.045 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:36.045 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.045 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.046 [ 00:12:36.046 { 00:12:36.046 "name": "BaseBdev3", 00:12:36.046 "aliases": [ 00:12:36.046 "1abd60f1-bc4a-4030-b8d8-74e9b50757cc" 00:12:36.046 ], 00:12:36.046 "product_name": "Malloc disk", 00:12:36.046 "block_size": 512, 00:12:36.046 "num_blocks": 65536, 00:12:36.046 "uuid": "1abd60f1-bc4a-4030-b8d8-74e9b50757cc", 00:12:36.046 "assigned_rate_limits": { 00:12:36.046 "rw_ios_per_sec": 0, 00:12:36.046 "rw_mbytes_per_sec": 0, 00:12:36.046 "r_mbytes_per_sec": 0, 00:12:36.046 "w_mbytes_per_sec": 0 00:12:36.046 }, 00:12:36.046 "claimed": true, 00:12:36.046 "claim_type": "exclusive_write", 00:12:36.046 "zoned": false, 00:12:36.046 "supported_io_types": { 00:12:36.046 "read": true, 00:12:36.046 
"write": true, 00:12:36.046 "unmap": true, 00:12:36.046 "flush": true, 00:12:36.046 "reset": true, 00:12:36.046 "nvme_admin": false, 00:12:36.046 "nvme_io": false, 00:12:36.046 "nvme_io_md": false, 00:12:36.046 "write_zeroes": true, 00:12:36.046 "zcopy": true, 00:12:36.046 "get_zone_info": false, 00:12:36.046 "zone_management": false, 00:12:36.046 "zone_append": false, 00:12:36.046 "compare": false, 00:12:36.046 "compare_and_write": false, 00:12:36.046 "abort": true, 00:12:36.046 "seek_hole": false, 00:12:36.046 "seek_data": false, 00:12:36.046 "copy": true, 00:12:36.046 "nvme_iov_md": false 00:12:36.046 }, 00:12:36.046 "memory_domains": [ 00:12:36.046 { 00:12:36.046 "dma_device_id": "system", 00:12:36.046 "dma_device_type": 1 00:12:36.046 }, 00:12:36.046 { 00:12:36.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.046 "dma_device_type": 2 00:12:36.046 } 00:12:36.046 ], 00:12:36.046 "driver_specific": {} 00:12:36.046 } 00:12:36.046 ] 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.046 "name": "Existed_Raid", 00:12:36.046 "uuid": "4a4649d9-d116-41df-a962-a85490df3abb", 00:12:36.046 "strip_size_kb": 0, 00:12:36.046 "state": "configuring", 00:12:36.046 "raid_level": "raid1", 00:12:36.046 "superblock": true, 00:12:36.046 "num_base_bdevs": 4, 00:12:36.046 "num_base_bdevs_discovered": 3, 00:12:36.046 "num_base_bdevs_operational": 4, 00:12:36.046 "base_bdevs_list": [ 00:12:36.046 { 00:12:36.046 "name": "BaseBdev1", 00:12:36.046 "uuid": "008cce5c-98a5-487b-87d1-89fb527c2bf6", 00:12:36.046 "is_configured": true, 00:12:36.046 "data_offset": 2048, 00:12:36.046 "data_size": 63488 00:12:36.046 }, 00:12:36.046 { 00:12:36.046 "name": "BaseBdev2", 00:12:36.046 "uuid": 
"ddcc3c8a-cfcd-478b-9e25-0c8732e6a823", 00:12:36.046 "is_configured": true, 00:12:36.046 "data_offset": 2048, 00:12:36.046 "data_size": 63488 00:12:36.046 }, 00:12:36.046 { 00:12:36.046 "name": "BaseBdev3", 00:12:36.046 "uuid": "1abd60f1-bc4a-4030-b8d8-74e9b50757cc", 00:12:36.046 "is_configured": true, 00:12:36.046 "data_offset": 2048, 00:12:36.046 "data_size": 63488 00:12:36.046 }, 00:12:36.046 { 00:12:36.046 "name": "BaseBdev4", 00:12:36.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.046 "is_configured": false, 00:12:36.046 "data_offset": 0, 00:12:36.046 "data_size": 0 00:12:36.046 } 00:12:36.046 ] 00:12:36.046 }' 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.046 12:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.306 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:36.306 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.306 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.306 [2024-09-30 12:29:48.177071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:36.306 [2024-09-30 12:29:48.177481] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:36.306 [2024-09-30 12:29:48.177544] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:36.306 [2024-09-30 12:29:48.177906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:36.306 BaseBdev4 00:12:36.306 [2024-09-30 12:29:48.178147] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:36.306 [2024-09-30 12:29:48.178167] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:36.306 [2024-09-30 12:29:48.178341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.306 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.306 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:36.306 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:36.306 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:36.306 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:36.306 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:36.306 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:36.306 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:36.306 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.306 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.306 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.306 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:36.306 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.306 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.566 [ 00:12:36.566 { 00:12:36.566 "name": "BaseBdev4", 00:12:36.566 "aliases": [ 00:12:36.566 "02e66db7-be74-4a1f-ba86-4add2d93a333" 00:12:36.566 ], 00:12:36.566 "product_name": "Malloc disk", 00:12:36.566 "block_size": 512, 00:12:36.566 
"num_blocks": 65536, 00:12:36.566 "uuid": "02e66db7-be74-4a1f-ba86-4add2d93a333", 00:12:36.566 "assigned_rate_limits": { 00:12:36.566 "rw_ios_per_sec": 0, 00:12:36.566 "rw_mbytes_per_sec": 0, 00:12:36.566 "r_mbytes_per_sec": 0, 00:12:36.566 "w_mbytes_per_sec": 0 00:12:36.566 }, 00:12:36.566 "claimed": true, 00:12:36.566 "claim_type": "exclusive_write", 00:12:36.566 "zoned": false, 00:12:36.566 "supported_io_types": { 00:12:36.566 "read": true, 00:12:36.566 "write": true, 00:12:36.566 "unmap": true, 00:12:36.566 "flush": true, 00:12:36.566 "reset": true, 00:12:36.566 "nvme_admin": false, 00:12:36.566 "nvme_io": false, 00:12:36.566 "nvme_io_md": false, 00:12:36.566 "write_zeroes": true, 00:12:36.566 "zcopy": true, 00:12:36.566 "get_zone_info": false, 00:12:36.566 "zone_management": false, 00:12:36.566 "zone_append": false, 00:12:36.566 "compare": false, 00:12:36.566 "compare_and_write": false, 00:12:36.566 "abort": true, 00:12:36.566 "seek_hole": false, 00:12:36.566 "seek_data": false, 00:12:36.566 "copy": true, 00:12:36.566 "nvme_iov_md": false 00:12:36.566 }, 00:12:36.566 "memory_domains": [ 00:12:36.566 { 00:12:36.566 "dma_device_id": "system", 00:12:36.566 "dma_device_type": 1 00:12:36.566 }, 00:12:36.566 { 00:12:36.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.566 "dma_device_type": 2 00:12:36.566 } 00:12:36.566 ], 00:12:36.566 "driver_specific": {} 00:12:36.566 } 00:12:36.566 ] 00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.566 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.566 "name": "Existed_Raid", 00:12:36.566 "uuid": "4a4649d9-d116-41df-a962-a85490df3abb", 00:12:36.566 "strip_size_kb": 0, 00:12:36.566 "state": "online", 00:12:36.566 "raid_level": "raid1", 00:12:36.566 "superblock": true, 00:12:36.566 "num_base_bdevs": 4, 
00:12:36.566 "num_base_bdevs_discovered": 4, 00:12:36.566 "num_base_bdevs_operational": 4, 00:12:36.566 "base_bdevs_list": [ 00:12:36.566 { 00:12:36.566 "name": "BaseBdev1", 00:12:36.566 "uuid": "008cce5c-98a5-487b-87d1-89fb527c2bf6", 00:12:36.566 "is_configured": true, 00:12:36.566 "data_offset": 2048, 00:12:36.566 "data_size": 63488 00:12:36.566 }, 00:12:36.566 { 00:12:36.566 "name": "BaseBdev2", 00:12:36.566 "uuid": "ddcc3c8a-cfcd-478b-9e25-0c8732e6a823", 00:12:36.566 "is_configured": true, 00:12:36.566 "data_offset": 2048, 00:12:36.566 "data_size": 63488 00:12:36.566 }, 00:12:36.566 { 00:12:36.566 "name": "BaseBdev3", 00:12:36.566 "uuid": "1abd60f1-bc4a-4030-b8d8-74e9b50757cc", 00:12:36.566 "is_configured": true, 00:12:36.567 "data_offset": 2048, 00:12:36.567 "data_size": 63488 00:12:36.567 }, 00:12:36.567 { 00:12:36.567 "name": "BaseBdev4", 00:12:36.567 "uuid": "02e66db7-be74-4a1f-ba86-4add2d93a333", 00:12:36.567 "is_configured": true, 00:12:36.567 "data_offset": 2048, 00:12:36.567 "data_size": 63488 00:12:36.567 } 00:12:36.567 ] 00:12:36.567 }' 00:12:36.567 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.567 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.826 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:36.826 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:36.826 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:36.826 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:36.826 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:36.826 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:36.826 
12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:36.826 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.826 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.826 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:36.826 [2024-09-30 12:29:48.676652] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.826 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.826 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:36.826 "name": "Existed_Raid", 00:12:36.826 "aliases": [ 00:12:36.826 "4a4649d9-d116-41df-a962-a85490df3abb" 00:12:36.826 ], 00:12:36.826 "product_name": "Raid Volume", 00:12:36.826 "block_size": 512, 00:12:36.826 "num_blocks": 63488, 00:12:36.826 "uuid": "4a4649d9-d116-41df-a962-a85490df3abb", 00:12:36.827 "assigned_rate_limits": { 00:12:36.827 "rw_ios_per_sec": 0, 00:12:36.827 "rw_mbytes_per_sec": 0, 00:12:36.827 "r_mbytes_per_sec": 0, 00:12:36.827 "w_mbytes_per_sec": 0 00:12:36.827 }, 00:12:36.827 "claimed": false, 00:12:36.827 "zoned": false, 00:12:36.827 "supported_io_types": { 00:12:36.827 "read": true, 00:12:36.827 "write": true, 00:12:36.827 "unmap": false, 00:12:36.827 "flush": false, 00:12:36.827 "reset": true, 00:12:36.827 "nvme_admin": false, 00:12:36.827 "nvme_io": false, 00:12:36.827 "nvme_io_md": false, 00:12:36.827 "write_zeroes": true, 00:12:36.827 "zcopy": false, 00:12:36.827 "get_zone_info": false, 00:12:36.827 "zone_management": false, 00:12:36.827 "zone_append": false, 00:12:36.827 "compare": false, 00:12:36.827 "compare_and_write": false, 00:12:36.827 "abort": false, 00:12:36.827 "seek_hole": false, 00:12:36.827 "seek_data": false, 00:12:36.827 "copy": false, 00:12:36.827 
"nvme_iov_md": false 00:12:36.827 }, 00:12:36.827 "memory_domains": [ 00:12:36.827 { 00:12:36.827 "dma_device_id": "system", 00:12:36.827 "dma_device_type": 1 00:12:36.827 }, 00:12:36.827 { 00:12:36.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.827 "dma_device_type": 2 00:12:36.827 }, 00:12:36.827 { 00:12:36.827 "dma_device_id": "system", 00:12:36.827 "dma_device_type": 1 00:12:36.827 }, 00:12:36.827 { 00:12:36.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.827 "dma_device_type": 2 00:12:36.827 }, 00:12:36.827 { 00:12:36.827 "dma_device_id": "system", 00:12:36.827 "dma_device_type": 1 00:12:36.827 }, 00:12:36.827 { 00:12:36.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.827 "dma_device_type": 2 00:12:36.827 }, 00:12:36.827 { 00:12:36.827 "dma_device_id": "system", 00:12:36.827 "dma_device_type": 1 00:12:36.827 }, 00:12:36.827 { 00:12:36.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.827 "dma_device_type": 2 00:12:36.827 } 00:12:36.827 ], 00:12:36.827 "driver_specific": { 00:12:36.827 "raid": { 00:12:36.827 "uuid": "4a4649d9-d116-41df-a962-a85490df3abb", 00:12:36.827 "strip_size_kb": 0, 00:12:36.827 "state": "online", 00:12:36.827 "raid_level": "raid1", 00:12:36.827 "superblock": true, 00:12:36.827 "num_base_bdevs": 4, 00:12:36.827 "num_base_bdevs_discovered": 4, 00:12:36.827 "num_base_bdevs_operational": 4, 00:12:36.827 "base_bdevs_list": [ 00:12:36.827 { 00:12:36.827 "name": "BaseBdev1", 00:12:36.827 "uuid": "008cce5c-98a5-487b-87d1-89fb527c2bf6", 00:12:36.827 "is_configured": true, 00:12:36.827 "data_offset": 2048, 00:12:36.827 "data_size": 63488 00:12:36.827 }, 00:12:36.827 { 00:12:36.827 "name": "BaseBdev2", 00:12:36.827 "uuid": "ddcc3c8a-cfcd-478b-9e25-0c8732e6a823", 00:12:36.827 "is_configured": true, 00:12:36.827 "data_offset": 2048, 00:12:36.827 "data_size": 63488 00:12:36.827 }, 00:12:36.827 { 00:12:36.827 "name": "BaseBdev3", 00:12:36.827 "uuid": "1abd60f1-bc4a-4030-b8d8-74e9b50757cc", 00:12:36.827 "is_configured": true, 
00:12:36.827 "data_offset": 2048, 00:12:36.827 "data_size": 63488 00:12:36.827 }, 00:12:36.827 { 00:12:36.827 "name": "BaseBdev4", 00:12:36.827 "uuid": "02e66db7-be74-4a1f-ba86-4add2d93a333", 00:12:36.827 "is_configured": true, 00:12:36.827 "data_offset": 2048, 00:12:36.827 "data_size": 63488 00:12:36.827 } 00:12:36.827 ] 00:12:36.827 } 00:12:36.827 } 00:12:36.827 }' 00:12:36.827 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:37.086 BaseBdev2 00:12:37.086 BaseBdev3 00:12:37.086 BaseBdev4' 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.086 12:29:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.086 12:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.086 [2024-09-30 12:29:48.955882] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:37.350 12:29:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.350 "name": "Existed_Raid", 00:12:37.350 "uuid": "4a4649d9-d116-41df-a962-a85490df3abb", 00:12:37.350 "strip_size_kb": 0, 00:12:37.350 
"state": "online", 00:12:37.350 "raid_level": "raid1", 00:12:37.350 "superblock": true, 00:12:37.350 "num_base_bdevs": 4, 00:12:37.350 "num_base_bdevs_discovered": 3, 00:12:37.350 "num_base_bdevs_operational": 3, 00:12:37.350 "base_bdevs_list": [ 00:12:37.350 { 00:12:37.350 "name": null, 00:12:37.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.350 "is_configured": false, 00:12:37.350 "data_offset": 0, 00:12:37.350 "data_size": 63488 00:12:37.350 }, 00:12:37.350 { 00:12:37.350 "name": "BaseBdev2", 00:12:37.350 "uuid": "ddcc3c8a-cfcd-478b-9e25-0c8732e6a823", 00:12:37.350 "is_configured": true, 00:12:37.350 "data_offset": 2048, 00:12:37.350 "data_size": 63488 00:12:37.350 }, 00:12:37.350 { 00:12:37.350 "name": "BaseBdev3", 00:12:37.350 "uuid": "1abd60f1-bc4a-4030-b8d8-74e9b50757cc", 00:12:37.350 "is_configured": true, 00:12:37.350 "data_offset": 2048, 00:12:37.350 "data_size": 63488 00:12:37.350 }, 00:12:37.350 { 00:12:37.350 "name": "BaseBdev4", 00:12:37.350 "uuid": "02e66db7-be74-4a1f-ba86-4add2d93a333", 00:12:37.350 "is_configured": true, 00:12:37.350 "data_offset": 2048, 00:12:37.350 "data_size": 63488 00:12:37.350 } 00:12:37.350 ] 00:12:37.350 }' 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.350 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.922 12:29:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.922 [2024-09-30 12:29:49.564925] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.922 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.922 [2024-09-30 12:29:49.727161] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.182 [2024-09-30 12:29:49.881700] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:38.182 [2024-09-30 12:29:49.881883] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.182 [2024-09-30 12:29:49.983223] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.182 [2024-09-30 12:29:49.983363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.182 [2024-09-30 12:29:49.983425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.182 12:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.182 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.182 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:38.182 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:38.182 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:38.182 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:38.182 12:29:50 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:38.182 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:38.182 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.182 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.442 BaseBdev2 00:12:38.442 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.442 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:38.442 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:38.442 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:38.442 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:38.442 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:38.442 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:38.442 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:38.442 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.442 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.442 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.442 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:38.442 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.442 12:29:50 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:38.442 [ 00:12:38.442 { 00:12:38.443 "name": "BaseBdev2", 00:12:38.443 "aliases": [ 00:12:38.443 "47dcd822-2517-4412-a8a8-07097b196414" 00:12:38.443 ], 00:12:38.443 "product_name": "Malloc disk", 00:12:38.443 "block_size": 512, 00:12:38.443 "num_blocks": 65536, 00:12:38.443 "uuid": "47dcd822-2517-4412-a8a8-07097b196414", 00:12:38.443 "assigned_rate_limits": { 00:12:38.443 "rw_ios_per_sec": 0, 00:12:38.443 "rw_mbytes_per_sec": 0, 00:12:38.443 "r_mbytes_per_sec": 0, 00:12:38.443 "w_mbytes_per_sec": 0 00:12:38.443 }, 00:12:38.443 "claimed": false, 00:12:38.443 "zoned": false, 00:12:38.443 "supported_io_types": { 00:12:38.443 "read": true, 00:12:38.443 "write": true, 00:12:38.443 "unmap": true, 00:12:38.443 "flush": true, 00:12:38.443 "reset": true, 00:12:38.443 "nvme_admin": false, 00:12:38.443 "nvme_io": false, 00:12:38.443 "nvme_io_md": false, 00:12:38.443 "write_zeroes": true, 00:12:38.443 "zcopy": true, 00:12:38.443 "get_zone_info": false, 00:12:38.443 "zone_management": false, 00:12:38.443 "zone_append": false, 00:12:38.443 "compare": false, 00:12:38.443 "compare_and_write": false, 00:12:38.443 "abort": true, 00:12:38.443 "seek_hole": false, 00:12:38.443 "seek_data": false, 00:12:38.443 "copy": true, 00:12:38.443 "nvme_iov_md": false 00:12:38.443 }, 00:12:38.443 "memory_domains": [ 00:12:38.443 { 00:12:38.443 "dma_device_id": "system", 00:12:38.443 "dma_device_type": 1 00:12:38.443 }, 00:12:38.443 { 00:12:38.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.443 "dma_device_type": 2 00:12:38.443 } 00:12:38.443 ], 00:12:38.443 "driver_specific": {} 00:12:38.443 } 00:12:38.443 ] 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:38.443 12:29:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.443 BaseBdev3 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.443 12:29:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.443 [ 00:12:38.443 { 00:12:38.443 "name": "BaseBdev3", 00:12:38.443 "aliases": [ 00:12:38.443 "3c662f88-ee94-4f02-8c5f-eb6691075e02" 00:12:38.443 ], 00:12:38.443 "product_name": "Malloc disk", 00:12:38.443 "block_size": 512, 00:12:38.443 "num_blocks": 65536, 00:12:38.443 "uuid": "3c662f88-ee94-4f02-8c5f-eb6691075e02", 00:12:38.443 "assigned_rate_limits": { 00:12:38.443 "rw_ios_per_sec": 0, 00:12:38.443 "rw_mbytes_per_sec": 0, 00:12:38.443 "r_mbytes_per_sec": 0, 00:12:38.443 "w_mbytes_per_sec": 0 00:12:38.443 }, 00:12:38.443 "claimed": false, 00:12:38.443 "zoned": false, 00:12:38.443 "supported_io_types": { 00:12:38.443 "read": true, 00:12:38.443 "write": true, 00:12:38.443 "unmap": true, 00:12:38.443 "flush": true, 00:12:38.443 "reset": true, 00:12:38.443 "nvme_admin": false, 00:12:38.443 "nvme_io": false, 00:12:38.443 "nvme_io_md": false, 00:12:38.443 "write_zeroes": true, 00:12:38.443 "zcopy": true, 00:12:38.443 "get_zone_info": false, 00:12:38.443 "zone_management": false, 00:12:38.443 "zone_append": false, 00:12:38.443 "compare": false, 00:12:38.443 "compare_and_write": false, 00:12:38.443 "abort": true, 00:12:38.443 "seek_hole": false, 00:12:38.443 "seek_data": false, 00:12:38.443 "copy": true, 00:12:38.443 "nvme_iov_md": false 00:12:38.443 }, 00:12:38.443 "memory_domains": [ 00:12:38.443 { 00:12:38.443 "dma_device_id": "system", 00:12:38.443 "dma_device_type": 1 00:12:38.443 }, 00:12:38.443 { 00:12:38.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.443 "dma_device_type": 2 00:12:38.443 } 00:12:38.443 ], 00:12:38.443 "driver_specific": {} 00:12:38.443 } 00:12:38.443 ] 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.443 BaseBdev4 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.443 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.443 [ 00:12:38.443 { 00:12:38.443 "name": "BaseBdev4", 00:12:38.443 "aliases": [ 00:12:38.443 "909462b8-38dd-4d22-bfb7-11d5db2c6c15" 00:12:38.443 ], 00:12:38.443 "product_name": "Malloc disk", 00:12:38.443 "block_size": 512, 00:12:38.443 "num_blocks": 65536, 00:12:38.443 "uuid": "909462b8-38dd-4d22-bfb7-11d5db2c6c15", 00:12:38.443 "assigned_rate_limits": { 00:12:38.443 "rw_ios_per_sec": 0, 00:12:38.443 "rw_mbytes_per_sec": 0, 00:12:38.443 "r_mbytes_per_sec": 0, 00:12:38.443 "w_mbytes_per_sec": 0 00:12:38.443 }, 00:12:38.443 "claimed": false, 00:12:38.443 "zoned": false, 00:12:38.443 "supported_io_types": { 00:12:38.443 "read": true, 00:12:38.443 "write": true, 00:12:38.443 "unmap": true, 00:12:38.443 "flush": true, 00:12:38.443 "reset": true, 00:12:38.443 "nvme_admin": false, 00:12:38.443 "nvme_io": false, 00:12:38.443 "nvme_io_md": false, 00:12:38.443 "write_zeroes": true, 00:12:38.443 "zcopy": true, 00:12:38.443 "get_zone_info": false, 00:12:38.443 "zone_management": false, 00:12:38.443 "zone_append": false, 00:12:38.443 "compare": false, 00:12:38.443 "compare_and_write": false, 00:12:38.443 "abort": true, 00:12:38.443 "seek_hole": false, 00:12:38.443 "seek_data": false, 00:12:38.443 "copy": true, 00:12:38.443 "nvme_iov_md": false 00:12:38.443 }, 00:12:38.443 "memory_domains": [ 00:12:38.443 { 00:12:38.443 "dma_device_id": "system", 00:12:38.443 "dma_device_type": 1 00:12:38.443 }, 00:12:38.443 { 00:12:38.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.443 "dma_device_type": 2 00:12:38.444 } 00:12:38.444 ], 00:12:38.444 "driver_specific": {} 00:12:38.444 } 00:12:38.444 ] 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.444 [2024-09-30 12:29:50.292433] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:38.444 [2024-09-30 12:29:50.292527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:38.444 [2024-09-30 12:29:50.292568] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:38.444 [2024-09-30 12:29:50.294658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:38.444 [2024-09-30 12:29:50.294755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.444 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.703 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.703 "name": "Existed_Raid", 00:12:38.703 "uuid": "4e7a9917-6357-40c6-a840-3e1b6897319b", 00:12:38.703 "strip_size_kb": 0, 00:12:38.703 "state": "configuring", 00:12:38.703 "raid_level": "raid1", 00:12:38.703 "superblock": true, 00:12:38.703 "num_base_bdevs": 4, 00:12:38.703 "num_base_bdevs_discovered": 3, 00:12:38.703 "num_base_bdevs_operational": 4, 00:12:38.703 "base_bdevs_list": [ 00:12:38.703 { 00:12:38.703 "name": "BaseBdev1", 00:12:38.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.703 "is_configured": false, 00:12:38.703 "data_offset": 0, 00:12:38.703 "data_size": 0 00:12:38.703 }, 00:12:38.703 { 00:12:38.703 "name": "BaseBdev2", 00:12:38.703 "uuid": "47dcd822-2517-4412-a8a8-07097b196414", 
00:12:38.703 "is_configured": true, 00:12:38.703 "data_offset": 2048, 00:12:38.703 "data_size": 63488 00:12:38.703 }, 00:12:38.703 { 00:12:38.703 "name": "BaseBdev3", 00:12:38.703 "uuid": "3c662f88-ee94-4f02-8c5f-eb6691075e02", 00:12:38.703 "is_configured": true, 00:12:38.703 "data_offset": 2048, 00:12:38.703 "data_size": 63488 00:12:38.703 }, 00:12:38.703 { 00:12:38.703 "name": "BaseBdev4", 00:12:38.703 "uuid": "909462b8-38dd-4d22-bfb7-11d5db2c6c15", 00:12:38.703 "is_configured": true, 00:12:38.703 "data_offset": 2048, 00:12:38.703 "data_size": 63488 00:12:38.703 } 00:12:38.703 ] 00:12:38.704 }' 00:12:38.704 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.704 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.963 [2024-09-30 12:29:50.711706] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.963 "name": "Existed_Raid", 00:12:38.963 "uuid": "4e7a9917-6357-40c6-a840-3e1b6897319b", 00:12:38.963 "strip_size_kb": 0, 00:12:38.963 "state": "configuring", 00:12:38.963 "raid_level": "raid1", 00:12:38.963 "superblock": true, 00:12:38.963 "num_base_bdevs": 4, 00:12:38.963 "num_base_bdevs_discovered": 2, 00:12:38.963 "num_base_bdevs_operational": 4, 00:12:38.963 "base_bdevs_list": [ 00:12:38.963 { 00:12:38.963 "name": "BaseBdev1", 00:12:38.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.963 "is_configured": false, 00:12:38.963 "data_offset": 0, 00:12:38.963 "data_size": 0 00:12:38.963 }, 00:12:38.963 { 00:12:38.963 "name": null, 00:12:38.963 "uuid": "47dcd822-2517-4412-a8a8-07097b196414", 00:12:38.963 
"is_configured": false, 00:12:38.963 "data_offset": 0, 00:12:38.963 "data_size": 63488 00:12:38.963 }, 00:12:38.963 { 00:12:38.963 "name": "BaseBdev3", 00:12:38.963 "uuid": "3c662f88-ee94-4f02-8c5f-eb6691075e02", 00:12:38.963 "is_configured": true, 00:12:38.963 "data_offset": 2048, 00:12:38.963 "data_size": 63488 00:12:38.963 }, 00:12:38.963 { 00:12:38.963 "name": "BaseBdev4", 00:12:38.963 "uuid": "909462b8-38dd-4d22-bfb7-11d5db2c6c15", 00:12:38.963 "is_configured": true, 00:12:38.963 "data_offset": 2048, 00:12:38.963 "data_size": 63488 00:12:38.963 } 00:12:38.963 ] 00:12:38.963 }' 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.963 12:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.531 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.531 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:39.531 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.531 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.531 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.531 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:39.531 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:39.531 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.531 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.531 [2024-09-30 12:29:51.244826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:39.531 BaseBdev1 
00:12:39.531 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.531 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:39.531 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:39.531 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:39.531 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:39.531 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:39.531 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.532 [ 00:12:39.532 { 00:12:39.532 "name": "BaseBdev1", 00:12:39.532 "aliases": [ 00:12:39.532 "fa49f912-ee9f-44f1-b1b3-7f4313975072" 00:12:39.532 ], 00:12:39.532 "product_name": "Malloc disk", 00:12:39.532 "block_size": 512, 00:12:39.532 "num_blocks": 65536, 00:12:39.532 "uuid": "fa49f912-ee9f-44f1-b1b3-7f4313975072", 00:12:39.532 "assigned_rate_limits": { 00:12:39.532 
"rw_ios_per_sec": 0, 00:12:39.532 "rw_mbytes_per_sec": 0, 00:12:39.532 "r_mbytes_per_sec": 0, 00:12:39.532 "w_mbytes_per_sec": 0 00:12:39.532 }, 00:12:39.532 "claimed": true, 00:12:39.532 "claim_type": "exclusive_write", 00:12:39.532 "zoned": false, 00:12:39.532 "supported_io_types": { 00:12:39.532 "read": true, 00:12:39.532 "write": true, 00:12:39.532 "unmap": true, 00:12:39.532 "flush": true, 00:12:39.532 "reset": true, 00:12:39.532 "nvme_admin": false, 00:12:39.532 "nvme_io": false, 00:12:39.532 "nvme_io_md": false, 00:12:39.532 "write_zeroes": true, 00:12:39.532 "zcopy": true, 00:12:39.532 "get_zone_info": false, 00:12:39.532 "zone_management": false, 00:12:39.532 "zone_append": false, 00:12:39.532 "compare": false, 00:12:39.532 "compare_and_write": false, 00:12:39.532 "abort": true, 00:12:39.532 "seek_hole": false, 00:12:39.532 "seek_data": false, 00:12:39.532 "copy": true, 00:12:39.532 "nvme_iov_md": false 00:12:39.532 }, 00:12:39.532 "memory_domains": [ 00:12:39.532 { 00:12:39.532 "dma_device_id": "system", 00:12:39.532 "dma_device_type": 1 00:12:39.532 }, 00:12:39.532 { 00:12:39.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.532 "dma_device_type": 2 00:12:39.532 } 00:12:39.532 ], 00:12:39.532 "driver_specific": {} 00:12:39.532 } 00:12:39.532 ] 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.532 "name": "Existed_Raid", 00:12:39.532 "uuid": "4e7a9917-6357-40c6-a840-3e1b6897319b", 00:12:39.532 "strip_size_kb": 0, 00:12:39.532 "state": "configuring", 00:12:39.532 "raid_level": "raid1", 00:12:39.532 "superblock": true, 00:12:39.532 "num_base_bdevs": 4, 00:12:39.532 "num_base_bdevs_discovered": 3, 00:12:39.532 "num_base_bdevs_operational": 4, 00:12:39.532 "base_bdevs_list": [ 00:12:39.532 { 00:12:39.532 "name": "BaseBdev1", 00:12:39.532 "uuid": "fa49f912-ee9f-44f1-b1b3-7f4313975072", 00:12:39.532 "is_configured": true, 00:12:39.532 "data_offset": 2048, 00:12:39.532 "data_size": 63488 
00:12:39.532 }, 00:12:39.532 { 00:12:39.532 "name": null, 00:12:39.532 "uuid": "47dcd822-2517-4412-a8a8-07097b196414", 00:12:39.532 "is_configured": false, 00:12:39.532 "data_offset": 0, 00:12:39.532 "data_size": 63488 00:12:39.532 }, 00:12:39.532 { 00:12:39.532 "name": "BaseBdev3", 00:12:39.532 "uuid": "3c662f88-ee94-4f02-8c5f-eb6691075e02", 00:12:39.532 "is_configured": true, 00:12:39.532 "data_offset": 2048, 00:12:39.532 "data_size": 63488 00:12:39.532 }, 00:12:39.532 { 00:12:39.532 "name": "BaseBdev4", 00:12:39.532 "uuid": "909462b8-38dd-4d22-bfb7-11d5db2c6c15", 00:12:39.532 "is_configured": true, 00:12:39.532 "data_offset": 2048, 00:12:39.532 "data_size": 63488 00:12:39.532 } 00:12:39.532 ] 00:12:39.532 }' 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.532 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.101 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.101 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.101 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.101 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:40.101 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.101 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:40.101 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:40.101 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.101 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.101 
[2024-09-30 12:29:51.756052] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:40.101 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.101 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:40.101 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.101 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.101 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.102 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.102 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.102 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.102 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.102 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.102 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.102 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.102 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.102 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.102 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.102 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.102 12:29:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.102 "name": "Existed_Raid", 00:12:40.102 "uuid": "4e7a9917-6357-40c6-a840-3e1b6897319b", 00:12:40.102 "strip_size_kb": 0, 00:12:40.102 "state": "configuring", 00:12:40.102 "raid_level": "raid1", 00:12:40.102 "superblock": true, 00:12:40.102 "num_base_bdevs": 4, 00:12:40.102 "num_base_bdevs_discovered": 2, 00:12:40.102 "num_base_bdevs_operational": 4, 00:12:40.102 "base_bdevs_list": [ 00:12:40.102 { 00:12:40.102 "name": "BaseBdev1", 00:12:40.102 "uuid": "fa49f912-ee9f-44f1-b1b3-7f4313975072", 00:12:40.102 "is_configured": true, 00:12:40.102 "data_offset": 2048, 00:12:40.102 "data_size": 63488 00:12:40.102 }, 00:12:40.102 { 00:12:40.102 "name": null, 00:12:40.102 "uuid": "47dcd822-2517-4412-a8a8-07097b196414", 00:12:40.102 "is_configured": false, 00:12:40.102 "data_offset": 0, 00:12:40.102 "data_size": 63488 00:12:40.102 }, 00:12:40.102 { 00:12:40.102 "name": null, 00:12:40.102 "uuid": "3c662f88-ee94-4f02-8c5f-eb6691075e02", 00:12:40.102 "is_configured": false, 00:12:40.102 "data_offset": 0, 00:12:40.102 "data_size": 63488 00:12:40.102 }, 00:12:40.102 { 00:12:40.102 "name": "BaseBdev4", 00:12:40.102 "uuid": "909462b8-38dd-4d22-bfb7-11d5db2c6c15", 00:12:40.102 "is_configured": true, 00:12:40.102 "data_offset": 2048, 00:12:40.102 "data_size": 63488 00:12:40.102 } 00:12:40.102 ] 00:12:40.102 }' 00:12:40.102 12:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.102 12:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.361 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.361 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:40.361 12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.361 
12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.361 12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.362 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:40.362 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:40.362 12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.362 12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.362 [2024-09-30 12:29:52.227518] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:40.362 12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.362 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:40.362 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.362 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.362 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.362 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.362 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.362 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.362 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.362 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:40.362 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.362 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.362 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.362 12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.362 12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.621 12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.621 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.621 "name": "Existed_Raid", 00:12:40.621 "uuid": "4e7a9917-6357-40c6-a840-3e1b6897319b", 00:12:40.621 "strip_size_kb": 0, 00:12:40.621 "state": "configuring", 00:12:40.621 "raid_level": "raid1", 00:12:40.621 "superblock": true, 00:12:40.621 "num_base_bdevs": 4, 00:12:40.621 "num_base_bdevs_discovered": 3, 00:12:40.621 "num_base_bdevs_operational": 4, 00:12:40.621 "base_bdevs_list": [ 00:12:40.621 { 00:12:40.621 "name": "BaseBdev1", 00:12:40.621 "uuid": "fa49f912-ee9f-44f1-b1b3-7f4313975072", 00:12:40.621 "is_configured": true, 00:12:40.621 "data_offset": 2048, 00:12:40.621 "data_size": 63488 00:12:40.621 }, 00:12:40.621 { 00:12:40.621 "name": null, 00:12:40.621 "uuid": "47dcd822-2517-4412-a8a8-07097b196414", 00:12:40.621 "is_configured": false, 00:12:40.621 "data_offset": 0, 00:12:40.621 "data_size": 63488 00:12:40.621 }, 00:12:40.621 { 00:12:40.621 "name": "BaseBdev3", 00:12:40.621 "uuid": "3c662f88-ee94-4f02-8c5f-eb6691075e02", 00:12:40.621 "is_configured": true, 00:12:40.621 "data_offset": 2048, 00:12:40.621 "data_size": 63488 00:12:40.621 }, 00:12:40.621 { 00:12:40.621 "name": "BaseBdev4", 00:12:40.621 "uuid": 
"909462b8-38dd-4d22-bfb7-11d5db2c6c15", 00:12:40.621 "is_configured": true, 00:12:40.621 "data_offset": 2048, 00:12:40.621 "data_size": 63488 00:12:40.621 } 00:12:40.621 ] 00:12:40.621 }' 00:12:40.621 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.621 12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.880 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:40.880 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.880 12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.880 12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.880 12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.880 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:40.881 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:40.881 12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.881 12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.881 [2024-09-30 12:29:52.678935] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:41.140 12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.140 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:41.140 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.140 12:29:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.140 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.140 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.140 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.140 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.140 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.140 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.140 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.140 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.140 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.140 12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.140 12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.140 12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.140 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.140 "name": "Existed_Raid", 00:12:41.140 "uuid": "4e7a9917-6357-40c6-a840-3e1b6897319b", 00:12:41.140 "strip_size_kb": 0, 00:12:41.140 "state": "configuring", 00:12:41.140 "raid_level": "raid1", 00:12:41.140 "superblock": true, 00:12:41.140 "num_base_bdevs": 4, 00:12:41.140 "num_base_bdevs_discovered": 2, 00:12:41.140 "num_base_bdevs_operational": 4, 00:12:41.140 "base_bdevs_list": [ 00:12:41.140 { 00:12:41.140 "name": null, 00:12:41.140 
"uuid": "fa49f912-ee9f-44f1-b1b3-7f4313975072", 00:12:41.140 "is_configured": false, 00:12:41.140 "data_offset": 0, 00:12:41.140 "data_size": 63488 00:12:41.140 }, 00:12:41.140 { 00:12:41.140 "name": null, 00:12:41.140 "uuid": "47dcd822-2517-4412-a8a8-07097b196414", 00:12:41.140 "is_configured": false, 00:12:41.140 "data_offset": 0, 00:12:41.140 "data_size": 63488 00:12:41.140 }, 00:12:41.140 { 00:12:41.140 "name": "BaseBdev3", 00:12:41.140 "uuid": "3c662f88-ee94-4f02-8c5f-eb6691075e02", 00:12:41.140 "is_configured": true, 00:12:41.140 "data_offset": 2048, 00:12:41.140 "data_size": 63488 00:12:41.140 }, 00:12:41.140 { 00:12:41.140 "name": "BaseBdev4", 00:12:41.140 "uuid": "909462b8-38dd-4d22-bfb7-11d5db2c6c15", 00:12:41.140 "is_configured": true, 00:12:41.140 "data_offset": 2048, 00:12:41.140 "data_size": 63488 00:12:41.140 } 00:12:41.140 ] 00:12:41.140 }' 00:12:41.140 12:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.140 12:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.400 [2024-09-30 12:29:53.228694] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.400 12:29:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.400 "name": "Existed_Raid", 00:12:41.400 "uuid": "4e7a9917-6357-40c6-a840-3e1b6897319b", 00:12:41.400 "strip_size_kb": 0, 00:12:41.400 "state": "configuring", 00:12:41.400 "raid_level": "raid1", 00:12:41.400 "superblock": true, 00:12:41.400 "num_base_bdevs": 4, 00:12:41.400 "num_base_bdevs_discovered": 3, 00:12:41.400 "num_base_bdevs_operational": 4, 00:12:41.400 "base_bdevs_list": [ 00:12:41.400 { 00:12:41.400 "name": null, 00:12:41.400 "uuid": "fa49f912-ee9f-44f1-b1b3-7f4313975072", 00:12:41.400 "is_configured": false, 00:12:41.400 "data_offset": 0, 00:12:41.400 "data_size": 63488 00:12:41.400 }, 00:12:41.400 { 00:12:41.400 "name": "BaseBdev2", 00:12:41.400 "uuid": "47dcd822-2517-4412-a8a8-07097b196414", 00:12:41.400 "is_configured": true, 00:12:41.400 "data_offset": 2048, 00:12:41.400 "data_size": 63488 00:12:41.400 }, 00:12:41.400 { 00:12:41.400 "name": "BaseBdev3", 00:12:41.400 "uuid": "3c662f88-ee94-4f02-8c5f-eb6691075e02", 00:12:41.400 "is_configured": true, 00:12:41.400 "data_offset": 2048, 00:12:41.400 "data_size": 63488 00:12:41.400 }, 00:12:41.400 { 00:12:41.400 "name": "BaseBdev4", 00:12:41.400 "uuid": "909462b8-38dd-4d22-bfb7-11d5db2c6c15", 00:12:41.400 "is_configured": true, 00:12:41.400 "data_offset": 2048, 00:12:41.400 "data_size": 63488 00:12:41.400 } 00:12:41.400 ] 00:12:41.400 }' 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.400 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:41.969 12:29:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fa49f912-ee9f-44f1-b1b3-7f4313975072 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.969 [2024-09-30 12:29:53.811055] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:41.969 [2024-09-30 12:29:53.811419] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:41.969 [2024-09-30 12:29:53.811481] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:41.969 [2024-09-30 12:29:53.811811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:41.969 NewBaseBdev 00:12:41.969 [2024-09-30 12:29:53.812017] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:41.969 [2024-09-30 12:29:53.812029] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:41.969 [2024-09-30 12:29:53.812184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:41.969 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.969 12:29:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.969 [ 00:12:41.969 { 00:12:41.969 "name": "NewBaseBdev", 00:12:41.969 "aliases": [ 00:12:41.969 "fa49f912-ee9f-44f1-b1b3-7f4313975072" 00:12:41.969 ], 00:12:41.969 "product_name": "Malloc disk", 00:12:41.969 "block_size": 512, 00:12:41.969 "num_blocks": 65536, 00:12:41.969 "uuid": "fa49f912-ee9f-44f1-b1b3-7f4313975072", 00:12:41.969 "assigned_rate_limits": { 00:12:41.969 "rw_ios_per_sec": 0, 00:12:41.970 "rw_mbytes_per_sec": 0, 00:12:41.970 "r_mbytes_per_sec": 0, 00:12:41.970 "w_mbytes_per_sec": 0 00:12:41.970 }, 00:12:41.970 "claimed": true, 00:12:41.970 "claim_type": "exclusive_write", 00:12:41.970 "zoned": false, 00:12:41.970 "supported_io_types": { 00:12:41.970 "read": true, 00:12:41.970 "write": true, 00:12:41.970 "unmap": true, 00:12:41.970 "flush": true, 00:12:41.970 "reset": true, 00:12:41.970 "nvme_admin": false, 00:12:41.970 "nvme_io": false, 00:12:41.970 "nvme_io_md": false, 00:12:41.970 "write_zeroes": true, 00:12:41.970 "zcopy": true, 00:12:41.970 "get_zone_info": false, 00:12:41.970 "zone_management": false, 00:12:41.970 "zone_append": false, 00:12:41.970 "compare": false, 00:12:41.970 "compare_and_write": false, 00:12:41.970 "abort": true, 00:12:41.970 "seek_hole": false, 00:12:41.970 "seek_data": false, 00:12:41.970 "copy": true, 00:12:41.970 "nvme_iov_md": false 00:12:41.970 }, 00:12:41.970 "memory_domains": [ 00:12:41.970 { 00:12:41.970 "dma_device_id": "system", 00:12:41.970 "dma_device_type": 1 00:12:41.970 }, 00:12:41.970 { 00:12:41.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.970 "dma_device_type": 2 00:12:41.970 } 00:12:41.970 ], 00:12:41.970 "driver_specific": {} 00:12:41.970 } 00:12:41.970 ] 00:12:41.970 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.970 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:41.970 12:29:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:41.970 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.970 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.970 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.970 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.970 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.970 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.970 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.970 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.970 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.970 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.970 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.970 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.970 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.229 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.229 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.229 "name": "Existed_Raid", 00:12:42.229 "uuid": "4e7a9917-6357-40c6-a840-3e1b6897319b", 00:12:42.229 "strip_size_kb": 0, 00:12:42.229 
"state": "online", 00:12:42.229 "raid_level": "raid1", 00:12:42.229 "superblock": true, 00:12:42.229 "num_base_bdevs": 4, 00:12:42.229 "num_base_bdevs_discovered": 4, 00:12:42.229 "num_base_bdevs_operational": 4, 00:12:42.229 "base_bdevs_list": [ 00:12:42.229 { 00:12:42.229 "name": "NewBaseBdev", 00:12:42.229 "uuid": "fa49f912-ee9f-44f1-b1b3-7f4313975072", 00:12:42.229 "is_configured": true, 00:12:42.229 "data_offset": 2048, 00:12:42.229 "data_size": 63488 00:12:42.229 }, 00:12:42.229 { 00:12:42.229 "name": "BaseBdev2", 00:12:42.229 "uuid": "47dcd822-2517-4412-a8a8-07097b196414", 00:12:42.229 "is_configured": true, 00:12:42.229 "data_offset": 2048, 00:12:42.229 "data_size": 63488 00:12:42.229 }, 00:12:42.229 { 00:12:42.229 "name": "BaseBdev3", 00:12:42.229 "uuid": "3c662f88-ee94-4f02-8c5f-eb6691075e02", 00:12:42.229 "is_configured": true, 00:12:42.229 "data_offset": 2048, 00:12:42.229 "data_size": 63488 00:12:42.229 }, 00:12:42.229 { 00:12:42.229 "name": "BaseBdev4", 00:12:42.229 "uuid": "909462b8-38dd-4d22-bfb7-11d5db2c6c15", 00:12:42.229 "is_configured": true, 00:12:42.229 "data_offset": 2048, 00:12:42.229 "data_size": 63488 00:12:42.229 } 00:12:42.229 ] 00:12:42.229 }' 00:12:42.229 12:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.229 12:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.489 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:42.489 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:42.489 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:42.489 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:42.489 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:42.489 
12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:42.489 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:42.489 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.489 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.489 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:42.489 [2024-09-30 12:29:54.282602] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.489 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.489 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:42.489 "name": "Existed_Raid", 00:12:42.489 "aliases": [ 00:12:42.489 "4e7a9917-6357-40c6-a840-3e1b6897319b" 00:12:42.489 ], 00:12:42.489 "product_name": "Raid Volume", 00:12:42.489 "block_size": 512, 00:12:42.489 "num_blocks": 63488, 00:12:42.489 "uuid": "4e7a9917-6357-40c6-a840-3e1b6897319b", 00:12:42.489 "assigned_rate_limits": { 00:12:42.489 "rw_ios_per_sec": 0, 00:12:42.489 "rw_mbytes_per_sec": 0, 00:12:42.489 "r_mbytes_per_sec": 0, 00:12:42.489 "w_mbytes_per_sec": 0 00:12:42.489 }, 00:12:42.489 "claimed": false, 00:12:42.489 "zoned": false, 00:12:42.489 "supported_io_types": { 00:12:42.489 "read": true, 00:12:42.489 "write": true, 00:12:42.489 "unmap": false, 00:12:42.489 "flush": false, 00:12:42.489 "reset": true, 00:12:42.489 "nvme_admin": false, 00:12:42.489 "nvme_io": false, 00:12:42.489 "nvme_io_md": false, 00:12:42.489 "write_zeroes": true, 00:12:42.489 "zcopy": false, 00:12:42.489 "get_zone_info": false, 00:12:42.489 "zone_management": false, 00:12:42.489 "zone_append": false, 00:12:42.489 "compare": false, 00:12:42.489 "compare_and_write": false, 00:12:42.489 
"abort": false, 00:12:42.489 "seek_hole": false, 00:12:42.489 "seek_data": false, 00:12:42.489 "copy": false, 00:12:42.489 "nvme_iov_md": false 00:12:42.489 }, 00:12:42.489 "memory_domains": [ 00:12:42.489 { 00:12:42.489 "dma_device_id": "system", 00:12:42.489 "dma_device_type": 1 00:12:42.489 }, 00:12:42.489 { 00:12:42.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.489 "dma_device_type": 2 00:12:42.489 }, 00:12:42.489 { 00:12:42.489 "dma_device_id": "system", 00:12:42.489 "dma_device_type": 1 00:12:42.489 }, 00:12:42.489 { 00:12:42.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.489 "dma_device_type": 2 00:12:42.489 }, 00:12:42.489 { 00:12:42.489 "dma_device_id": "system", 00:12:42.489 "dma_device_type": 1 00:12:42.489 }, 00:12:42.489 { 00:12:42.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.489 "dma_device_type": 2 00:12:42.489 }, 00:12:42.489 { 00:12:42.489 "dma_device_id": "system", 00:12:42.489 "dma_device_type": 1 00:12:42.489 }, 00:12:42.489 { 00:12:42.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.489 "dma_device_type": 2 00:12:42.489 } 00:12:42.489 ], 00:12:42.489 "driver_specific": { 00:12:42.489 "raid": { 00:12:42.489 "uuid": "4e7a9917-6357-40c6-a840-3e1b6897319b", 00:12:42.489 "strip_size_kb": 0, 00:12:42.489 "state": "online", 00:12:42.489 "raid_level": "raid1", 00:12:42.489 "superblock": true, 00:12:42.489 "num_base_bdevs": 4, 00:12:42.489 "num_base_bdevs_discovered": 4, 00:12:42.489 "num_base_bdevs_operational": 4, 00:12:42.489 "base_bdevs_list": [ 00:12:42.489 { 00:12:42.489 "name": "NewBaseBdev", 00:12:42.489 "uuid": "fa49f912-ee9f-44f1-b1b3-7f4313975072", 00:12:42.489 "is_configured": true, 00:12:42.489 "data_offset": 2048, 00:12:42.489 "data_size": 63488 00:12:42.489 }, 00:12:42.489 { 00:12:42.489 "name": "BaseBdev2", 00:12:42.489 "uuid": "47dcd822-2517-4412-a8a8-07097b196414", 00:12:42.489 "is_configured": true, 00:12:42.489 "data_offset": 2048, 00:12:42.489 "data_size": 63488 00:12:42.489 }, 00:12:42.489 { 
00:12:42.489 "name": "BaseBdev3", 00:12:42.489 "uuid": "3c662f88-ee94-4f02-8c5f-eb6691075e02", 00:12:42.490 "is_configured": true, 00:12:42.490 "data_offset": 2048, 00:12:42.490 "data_size": 63488 00:12:42.490 }, 00:12:42.490 { 00:12:42.490 "name": "BaseBdev4", 00:12:42.490 "uuid": "909462b8-38dd-4d22-bfb7-11d5db2c6c15", 00:12:42.490 "is_configured": true, 00:12:42.490 "data_offset": 2048, 00:12:42.490 "data_size": 63488 00:12:42.490 } 00:12:42.490 ] 00:12:42.490 } 00:12:42.490 } 00:12:42.490 }' 00:12:42.490 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:42.490 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:42.490 BaseBdev2 00:12:42.490 BaseBdev3 00:12:42.490 BaseBdev4' 00:12:42.490 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.749 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:42.749 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.749 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:42.749 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.749 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.749 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.749 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.749 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:42.749 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.750 [2024-09-30 12:29:54.633676] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:42.750 [2024-09-30 12:29:54.633777] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:42.750 [2024-09-30 12:29:54.633890] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.750 [2024-09-30 12:29:54.634202] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:42.750 [2024-09-30 12:29:54.634216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73729 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73729 ']' 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73729 00:12:42.750 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:43.009 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:43.009 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73729 00:12:43.009 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:43.010 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:43.010 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73729' 00:12:43.010 killing process with pid 73729 00:12:43.010 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73729 00:12:43.010 [2024-09-30 12:29:54.682317] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:43.010 12:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73729 00:12:43.269 [2024-09-30 12:29:55.106432] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:44.649 12:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:44.649 ************************************ 00:12:44.649 END TEST raid_state_function_test_sb 00:12:44.649 ************************************ 00:12:44.649 00:12:44.649 real 0m11.784s 
00:12:44.649 user 0m18.380s 00:12:44.649 sys 0m2.176s 00:12:44.649 12:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:44.649 12:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.649 12:29:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:44.649 12:29:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:44.649 12:29:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:44.649 12:29:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:44.649 ************************************ 00:12:44.649 START TEST raid_superblock_test 00:12:44.649 ************************************ 00:12:44.649 12:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:12:44.649 12:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:44.649 12:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:44.649 12:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:44.649 12:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:44.649 12:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:44.649 12:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:44.649 12:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:44.649 12:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:44.649 12:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:44.649 12:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:44.649 12:29:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:44.649 12:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:44.649 12:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:44.649 12:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:44.649 12:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:44.649 12:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74405 00:12:44.649 12:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:44.649 12:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74405 00:12:44.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.909 12:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74405 ']' 00:12:44.909 12:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.909 12:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:44.909 12:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.909 12:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:44.909 12:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.909 [2024-09-30 12:29:56.625500] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:44.909 [2024-09-30 12:29:56.625610] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74405 ] 00:12:44.909 [2024-09-30 12:29:56.787774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.169 [2024-09-30 12:29:57.043877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.428 [2024-09-30 12:29:57.275440] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.428 [2024-09-30 12:29:57.275587] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:45.688 
12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.688 malloc1 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.688 [2024-09-30 12:29:57.491260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:45.688 [2024-09-30 12:29:57.491413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.688 [2024-09-30 12:29:57.491446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:45.688 [2024-09-30 12:29:57.491459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.688 [2024-09-30 12:29:57.493863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.688 [2024-09-30 12:29:57.493899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:45.688 pt1 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.688 malloc2 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.688 [2024-09-30 12:29:57.578180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:45.688 [2024-09-30 12:29:57.578334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.688 [2024-09-30 12:29:57.578378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:45.688 [2024-09-30 12:29:57.578408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.688 [2024-09-30 12:29:57.580873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.688 [2024-09-30 12:29:57.580967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:45.688 
pt2 00:12:45.688 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.948 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:45.948 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:45.948 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:45.948 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:45.948 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:45.948 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.949 malloc3 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.949 [2024-09-30 12:29:57.644073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:45.949 [2024-09-30 12:29:57.644198] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.949 [2024-09-30 12:29:57.644237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:45.949 [2024-09-30 12:29:57.644265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.949 [2024-09-30 12:29:57.646617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.949 [2024-09-30 12:29:57.646689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:45.949 pt3 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.949 malloc4 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.949 [2024-09-30 12:29:57.708890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:45.949 [2024-09-30 12:29:57.708949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.949 [2024-09-30 12:29:57.708968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:45.949 [2024-09-30 12:29:57.708977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.949 [2024-09-30 12:29:57.711313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.949 [2024-09-30 12:29:57.711438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:45.949 pt4 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.949 [2024-09-30 12:29:57.720937] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:45.949 [2024-09-30 12:29:57.723000] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:45.949 [2024-09-30 12:29:57.723064] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:45.949 [2024-09-30 12:29:57.723105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:45.949 [2024-09-30 12:29:57.723298] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:45.949 [2024-09-30 12:29:57.723320] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:45.949 [2024-09-30 12:29:57.723613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:45.949 [2024-09-30 12:29:57.723807] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:45.949 [2024-09-30 12:29:57.723822] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:45.949 [2024-09-30 12:29:57.723980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.949 
12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.949 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.949 "name": "raid_bdev1", 00:12:45.949 "uuid": "aa1ef880-6de3-4548-b364-751e61ddb5b5", 00:12:45.949 "strip_size_kb": 0, 00:12:45.949 "state": "online", 00:12:45.949 "raid_level": "raid1", 00:12:45.949 "superblock": true, 00:12:45.949 "num_base_bdevs": 4, 00:12:45.949 "num_base_bdevs_discovered": 4, 00:12:45.949 "num_base_bdevs_operational": 4, 00:12:45.949 "base_bdevs_list": [ 00:12:45.949 { 00:12:45.949 "name": "pt1", 00:12:45.949 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:45.949 "is_configured": true, 00:12:45.949 "data_offset": 2048, 00:12:45.949 "data_size": 63488 00:12:45.949 }, 00:12:45.949 { 00:12:45.949 "name": "pt2", 00:12:45.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:45.949 "is_configured": true, 00:12:45.949 "data_offset": 2048, 00:12:45.949 "data_size": 63488 00:12:45.949 }, 00:12:45.949 { 00:12:45.949 "name": "pt3", 00:12:45.949 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:45.949 "is_configured": true, 00:12:45.949 "data_offset": 2048, 00:12:45.949 "data_size": 63488 
00:12:45.949 }, 00:12:45.949 { 00:12:45.949 "name": "pt4", 00:12:45.949 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:45.949 "is_configured": true, 00:12:45.949 "data_offset": 2048, 00:12:45.949 "data_size": 63488 00:12:45.949 } 00:12:45.950 ] 00:12:45.950 }' 00:12:45.950 12:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.950 12:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.519 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:46.519 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:46.519 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:46.519 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:46.519 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:46.519 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:46.519 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:46.519 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:46.519 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.519 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.519 [2024-09-30 12:29:58.172445] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.519 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.519 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:46.519 "name": "raid_bdev1", 00:12:46.519 "aliases": [ 00:12:46.519 "aa1ef880-6de3-4548-b364-751e61ddb5b5" 00:12:46.519 ], 
00:12:46.519 "product_name": "Raid Volume", 00:12:46.519 "block_size": 512, 00:12:46.519 "num_blocks": 63488, 00:12:46.519 "uuid": "aa1ef880-6de3-4548-b364-751e61ddb5b5", 00:12:46.519 "assigned_rate_limits": { 00:12:46.519 "rw_ios_per_sec": 0, 00:12:46.519 "rw_mbytes_per_sec": 0, 00:12:46.519 "r_mbytes_per_sec": 0, 00:12:46.519 "w_mbytes_per_sec": 0 00:12:46.519 }, 00:12:46.519 "claimed": false, 00:12:46.519 "zoned": false, 00:12:46.519 "supported_io_types": { 00:12:46.519 "read": true, 00:12:46.519 "write": true, 00:12:46.519 "unmap": false, 00:12:46.519 "flush": false, 00:12:46.519 "reset": true, 00:12:46.519 "nvme_admin": false, 00:12:46.519 "nvme_io": false, 00:12:46.519 "nvme_io_md": false, 00:12:46.519 "write_zeroes": true, 00:12:46.519 "zcopy": false, 00:12:46.519 "get_zone_info": false, 00:12:46.519 "zone_management": false, 00:12:46.519 "zone_append": false, 00:12:46.519 "compare": false, 00:12:46.519 "compare_and_write": false, 00:12:46.519 "abort": false, 00:12:46.519 "seek_hole": false, 00:12:46.519 "seek_data": false, 00:12:46.519 "copy": false, 00:12:46.519 "nvme_iov_md": false 00:12:46.519 }, 00:12:46.519 "memory_domains": [ 00:12:46.519 { 00:12:46.519 "dma_device_id": "system", 00:12:46.519 "dma_device_type": 1 00:12:46.519 }, 00:12:46.519 { 00:12:46.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.519 "dma_device_type": 2 00:12:46.519 }, 00:12:46.519 { 00:12:46.519 "dma_device_id": "system", 00:12:46.519 "dma_device_type": 1 00:12:46.519 }, 00:12:46.519 { 00:12:46.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.519 "dma_device_type": 2 00:12:46.519 }, 00:12:46.519 { 00:12:46.519 "dma_device_id": "system", 00:12:46.519 "dma_device_type": 1 00:12:46.519 }, 00:12:46.519 { 00:12:46.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.519 "dma_device_type": 2 00:12:46.519 }, 00:12:46.519 { 00:12:46.519 "dma_device_id": "system", 00:12:46.519 "dma_device_type": 1 00:12:46.519 }, 00:12:46.519 { 00:12:46.519 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:46.519 "dma_device_type": 2 00:12:46.519 } 00:12:46.519 ], 00:12:46.519 "driver_specific": { 00:12:46.519 "raid": { 00:12:46.519 "uuid": "aa1ef880-6de3-4548-b364-751e61ddb5b5", 00:12:46.519 "strip_size_kb": 0, 00:12:46.519 "state": "online", 00:12:46.519 "raid_level": "raid1", 00:12:46.519 "superblock": true, 00:12:46.519 "num_base_bdevs": 4, 00:12:46.519 "num_base_bdevs_discovered": 4, 00:12:46.519 "num_base_bdevs_operational": 4, 00:12:46.519 "base_bdevs_list": [ 00:12:46.519 { 00:12:46.519 "name": "pt1", 00:12:46.519 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:46.519 "is_configured": true, 00:12:46.519 "data_offset": 2048, 00:12:46.519 "data_size": 63488 00:12:46.519 }, 00:12:46.519 { 00:12:46.519 "name": "pt2", 00:12:46.519 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:46.520 "is_configured": true, 00:12:46.520 "data_offset": 2048, 00:12:46.520 "data_size": 63488 00:12:46.520 }, 00:12:46.520 { 00:12:46.520 "name": "pt3", 00:12:46.520 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:46.520 "is_configured": true, 00:12:46.520 "data_offset": 2048, 00:12:46.520 "data_size": 63488 00:12:46.520 }, 00:12:46.520 { 00:12:46.520 "name": "pt4", 00:12:46.520 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:46.520 "is_configured": true, 00:12:46.520 "data_offset": 2048, 00:12:46.520 "data_size": 63488 00:12:46.520 } 00:12:46.520 ] 00:12:46.520 } 00:12:46.520 } 00:12:46.520 }' 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:46.520 pt2 00:12:46.520 pt3 00:12:46.520 pt4' 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.520 12:29:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.520 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.780 [2024-09-30 12:29:58.495813] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=aa1ef880-6de3-4548-b364-751e61ddb5b5 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z aa1ef880-6de3-4548-b364-751e61ddb5b5 ']' 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.780 [2024-09-30 12:29:58.539457] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:46.780 [2024-09-30 12:29:58.539528] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.780 [2024-09-30 12:29:58.539627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.780 [2024-09-30 12:29:58.539717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.780 [2024-09-30 12:29:58.539732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.780 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.781 12:29:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.781 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.781 [2024-09-30 12:29:58.671247] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:46.781 [2024-09-30 12:29:58.673482] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:46.781 [2024-09-30 12:29:58.673533] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:46.781 [2024-09-30 12:29:58.673566] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:46.781 [2024-09-30 12:29:58.673625] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:46.781 [2024-09-30 12:29:58.673676] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:46.781 [2024-09-30 12:29:58.673694] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:46.781 [2024-09-30 12:29:58.673712] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:46.781 [2024-09-30 12:29:58.673725] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:46.781 [2024-09-30 12:29:58.673735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:12:47.041 request: 00:12:47.041 { 00:12:47.041 "name": "raid_bdev1", 00:12:47.041 "raid_level": "raid1", 00:12:47.041 "base_bdevs": [ 00:12:47.041 "malloc1", 00:12:47.041 "malloc2", 00:12:47.041 "malloc3", 00:12:47.041 "malloc4" 00:12:47.041 ], 00:12:47.041 "superblock": false, 00:12:47.041 "method": "bdev_raid_create", 00:12:47.041 "req_id": 1 00:12:47.041 } 00:12:47.041 Got JSON-RPC error response 00:12:47.041 response: 00:12:47.041 { 00:12:47.041 "code": -17, 00:12:47.041 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:47.041 } 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:47.041 
12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.041 [2024-09-30 12:29:58.735116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:47.041 [2024-09-30 12:29:58.735210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.041 [2024-09-30 12:29:58.735242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:47.041 [2024-09-30 12:29:58.735271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.041 [2024-09-30 12:29:58.737723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.041 [2024-09-30 12:29:58.737806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:47.041 [2024-09-30 12:29:58.737896] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:47.041 [2024-09-30 12:29:58.737983] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:47.041 pt1 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.041 12:29:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.041 "name": "raid_bdev1", 00:12:47.041 "uuid": "aa1ef880-6de3-4548-b364-751e61ddb5b5", 00:12:47.041 "strip_size_kb": 0, 00:12:47.041 "state": "configuring", 00:12:47.041 "raid_level": "raid1", 00:12:47.041 "superblock": true, 00:12:47.041 "num_base_bdevs": 4, 00:12:47.041 "num_base_bdevs_discovered": 1, 00:12:47.041 "num_base_bdevs_operational": 4, 00:12:47.041 "base_bdevs_list": [ 00:12:47.041 { 00:12:47.041 "name": "pt1", 00:12:47.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:47.041 "is_configured": true, 00:12:47.041 "data_offset": 2048, 00:12:47.041 "data_size": 63488 00:12:47.041 }, 00:12:47.041 { 00:12:47.041 "name": null, 00:12:47.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.041 "is_configured": false, 00:12:47.041 "data_offset": 2048, 00:12:47.041 "data_size": 63488 00:12:47.041 }, 00:12:47.041 { 00:12:47.041 "name": null, 00:12:47.041 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:47.041 
"is_configured": false, 00:12:47.041 "data_offset": 2048, 00:12:47.041 "data_size": 63488 00:12:47.041 }, 00:12:47.041 { 00:12:47.041 "name": null, 00:12:47.041 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:47.041 "is_configured": false, 00:12:47.041 "data_offset": 2048, 00:12:47.041 "data_size": 63488 00:12:47.041 } 00:12:47.041 ] 00:12:47.041 }' 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.041 12:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.306 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:47.306 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:47.306 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.306 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.306 [2024-09-30 12:29:59.182374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:47.306 [2024-09-30 12:29:59.182427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.306 [2024-09-30 12:29:59.182461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:47.306 [2024-09-30 12:29:59.182471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.306 [2024-09-30 12:29:59.182928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.306 [2024-09-30 12:29:59.182950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:47.306 [2024-09-30 12:29:59.183018] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:47.306 [2024-09-30 12:29:59.183049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:47.306 pt2 00:12:47.306 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.306 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:47.306 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.306 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.306 [2024-09-30 12:29:59.194371] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:47.581 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.581 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:47.581 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.581 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.581 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.581 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.581 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.581 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.581 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.581 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.581 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.581 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.581 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:47.581 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.581 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.581 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.581 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.581 "name": "raid_bdev1", 00:12:47.581 "uuid": "aa1ef880-6de3-4548-b364-751e61ddb5b5", 00:12:47.581 "strip_size_kb": 0, 00:12:47.581 "state": "configuring", 00:12:47.581 "raid_level": "raid1", 00:12:47.581 "superblock": true, 00:12:47.581 "num_base_bdevs": 4, 00:12:47.581 "num_base_bdevs_discovered": 1, 00:12:47.581 "num_base_bdevs_operational": 4, 00:12:47.581 "base_bdevs_list": [ 00:12:47.581 { 00:12:47.581 "name": "pt1", 00:12:47.581 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:47.581 "is_configured": true, 00:12:47.581 "data_offset": 2048, 00:12:47.581 "data_size": 63488 00:12:47.581 }, 00:12:47.581 { 00:12:47.581 "name": null, 00:12:47.581 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.581 "is_configured": false, 00:12:47.581 "data_offset": 0, 00:12:47.581 "data_size": 63488 00:12:47.581 }, 00:12:47.581 { 00:12:47.581 "name": null, 00:12:47.581 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:47.581 "is_configured": false, 00:12:47.581 "data_offset": 2048, 00:12:47.581 "data_size": 63488 00:12:47.581 }, 00:12:47.581 { 00:12:47.581 "name": null, 00:12:47.581 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:47.581 "is_configured": false, 00:12:47.581 "data_offset": 2048, 00:12:47.581 "data_size": 63488 00:12:47.581 } 00:12:47.581 ] 00:12:47.581 }' 00:12:47.581 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.581 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.876 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:47.876 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:47.876 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:47.876 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.876 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.876 [2024-09-30 12:29:59.653614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:47.877 [2024-09-30 12:29:59.653697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.877 [2024-09-30 12:29:59.653725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:47.877 [2024-09-30 12:29:59.653736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.877 [2024-09-30 12:29:59.654238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.877 [2024-09-30 12:29:59.654263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:47.877 [2024-09-30 12:29:59.654352] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:47.877 [2024-09-30 12:29:59.654383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:47.877 pt2 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:47.877 12:29:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.877 [2024-09-30 12:29:59.661588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:47.877 [2024-09-30 12:29:59.661652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.877 [2024-09-30 12:29:59.661687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:47.877 [2024-09-30 12:29:59.661695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.877 [2024-09-30 12:29:59.662100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.877 [2024-09-30 12:29:59.662116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:47.877 [2024-09-30 12:29:59.662180] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:47.877 [2024-09-30 12:29:59.662196] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:47.877 pt3 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.877 [2024-09-30 12:29:59.673530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:47.877 [2024-09-30 
12:29:59.673573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.877 [2024-09-30 12:29:59.673605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:47.877 [2024-09-30 12:29:59.673612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.877 [2024-09-30 12:29:59.673992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.877 [2024-09-30 12:29:59.674009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:47.877 [2024-09-30 12:29:59.674063] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:47.877 [2024-09-30 12:29:59.674090] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:47.877 [2024-09-30 12:29:59.674230] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:47.877 [2024-09-30 12:29:59.674238] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:47.877 [2024-09-30 12:29:59.674485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:47.877 [2024-09-30 12:29:59.674639] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:47.877 [2024-09-30 12:29:59.674652] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:47.877 [2024-09-30 12:29:59.674787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.877 pt4 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.877 "name": "raid_bdev1", 00:12:47.877 "uuid": "aa1ef880-6de3-4548-b364-751e61ddb5b5", 00:12:47.877 "strip_size_kb": 0, 00:12:47.877 "state": "online", 00:12:47.877 "raid_level": "raid1", 00:12:47.877 "superblock": true, 00:12:47.877 "num_base_bdevs": 4, 00:12:47.877 
"num_base_bdevs_discovered": 4, 00:12:47.877 "num_base_bdevs_operational": 4, 00:12:47.877 "base_bdevs_list": [ 00:12:47.877 { 00:12:47.877 "name": "pt1", 00:12:47.877 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:47.877 "is_configured": true, 00:12:47.877 "data_offset": 2048, 00:12:47.877 "data_size": 63488 00:12:47.877 }, 00:12:47.877 { 00:12:47.877 "name": "pt2", 00:12:47.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.877 "is_configured": true, 00:12:47.877 "data_offset": 2048, 00:12:47.877 "data_size": 63488 00:12:47.877 }, 00:12:47.877 { 00:12:47.877 "name": "pt3", 00:12:47.877 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:47.877 "is_configured": true, 00:12:47.877 "data_offset": 2048, 00:12:47.877 "data_size": 63488 00:12:47.877 }, 00:12:47.877 { 00:12:47.877 "name": "pt4", 00:12:47.877 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:47.877 "is_configured": true, 00:12:47.877 "data_offset": 2048, 00:12:47.877 "data_size": 63488 00:12:47.877 } 00:12:47.877 ] 00:12:47.877 }' 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.877 12:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:48.472 12:30:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.472 [2024-09-30 12:30:00.149112] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:48.472 "name": "raid_bdev1", 00:12:48.472 "aliases": [ 00:12:48.472 "aa1ef880-6de3-4548-b364-751e61ddb5b5" 00:12:48.472 ], 00:12:48.472 "product_name": "Raid Volume", 00:12:48.472 "block_size": 512, 00:12:48.472 "num_blocks": 63488, 00:12:48.472 "uuid": "aa1ef880-6de3-4548-b364-751e61ddb5b5", 00:12:48.472 "assigned_rate_limits": { 00:12:48.472 "rw_ios_per_sec": 0, 00:12:48.472 "rw_mbytes_per_sec": 0, 00:12:48.472 "r_mbytes_per_sec": 0, 00:12:48.472 "w_mbytes_per_sec": 0 00:12:48.472 }, 00:12:48.472 "claimed": false, 00:12:48.472 "zoned": false, 00:12:48.472 "supported_io_types": { 00:12:48.472 "read": true, 00:12:48.472 "write": true, 00:12:48.472 "unmap": false, 00:12:48.472 "flush": false, 00:12:48.472 "reset": true, 00:12:48.472 "nvme_admin": false, 00:12:48.472 "nvme_io": false, 00:12:48.472 "nvme_io_md": false, 00:12:48.472 "write_zeroes": true, 00:12:48.472 "zcopy": false, 00:12:48.472 "get_zone_info": false, 00:12:48.472 "zone_management": false, 00:12:48.472 "zone_append": false, 00:12:48.472 "compare": false, 00:12:48.472 "compare_and_write": false, 00:12:48.472 "abort": false, 00:12:48.472 "seek_hole": false, 00:12:48.472 "seek_data": false, 00:12:48.472 "copy": false, 00:12:48.472 "nvme_iov_md": false 00:12:48.472 }, 00:12:48.472 "memory_domains": [ 00:12:48.472 { 00:12:48.472 "dma_device_id": "system", 00:12:48.472 
"dma_device_type": 1 00:12:48.472 }, 00:12:48.472 { 00:12:48.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.472 "dma_device_type": 2 00:12:48.472 }, 00:12:48.472 { 00:12:48.472 "dma_device_id": "system", 00:12:48.472 "dma_device_type": 1 00:12:48.472 }, 00:12:48.472 { 00:12:48.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.472 "dma_device_type": 2 00:12:48.472 }, 00:12:48.472 { 00:12:48.472 "dma_device_id": "system", 00:12:48.472 "dma_device_type": 1 00:12:48.472 }, 00:12:48.472 { 00:12:48.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.472 "dma_device_type": 2 00:12:48.472 }, 00:12:48.472 { 00:12:48.472 "dma_device_id": "system", 00:12:48.472 "dma_device_type": 1 00:12:48.472 }, 00:12:48.472 { 00:12:48.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.472 "dma_device_type": 2 00:12:48.472 } 00:12:48.472 ], 00:12:48.472 "driver_specific": { 00:12:48.472 "raid": { 00:12:48.472 "uuid": "aa1ef880-6de3-4548-b364-751e61ddb5b5", 00:12:48.472 "strip_size_kb": 0, 00:12:48.472 "state": "online", 00:12:48.472 "raid_level": "raid1", 00:12:48.472 "superblock": true, 00:12:48.472 "num_base_bdevs": 4, 00:12:48.472 "num_base_bdevs_discovered": 4, 00:12:48.472 "num_base_bdevs_operational": 4, 00:12:48.472 "base_bdevs_list": [ 00:12:48.472 { 00:12:48.472 "name": "pt1", 00:12:48.472 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:48.472 "is_configured": true, 00:12:48.472 "data_offset": 2048, 00:12:48.472 "data_size": 63488 00:12:48.472 }, 00:12:48.472 { 00:12:48.472 "name": "pt2", 00:12:48.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:48.472 "is_configured": true, 00:12:48.472 "data_offset": 2048, 00:12:48.472 "data_size": 63488 00:12:48.472 }, 00:12:48.472 { 00:12:48.472 "name": "pt3", 00:12:48.472 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:48.472 "is_configured": true, 00:12:48.472 "data_offset": 2048, 00:12:48.472 "data_size": 63488 00:12:48.472 }, 00:12:48.472 { 00:12:48.472 "name": "pt4", 00:12:48.472 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:48.472 "is_configured": true, 00:12:48.472 "data_offset": 2048, 00:12:48.472 "data_size": 63488 00:12:48.472 } 00:12:48.472 ] 00:12:48.472 } 00:12:48.472 } 00:12:48.472 }' 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:48.472 pt2 00:12:48.472 pt3 00:12:48.472 pt4' 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.472 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.732 [2024-09-30 12:30:00.452507] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' aa1ef880-6de3-4548-b364-751e61ddb5b5 '!=' aa1ef880-6de3-4548-b364-751e61ddb5b5 ']' 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.732 [2024-09-30 12:30:00.496196] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:48.732 12:30:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.732 "name": "raid_bdev1", 00:12:48.732 "uuid": "aa1ef880-6de3-4548-b364-751e61ddb5b5", 00:12:48.732 "strip_size_kb": 0, 00:12:48.732 "state": "online", 
00:12:48.732 "raid_level": "raid1", 00:12:48.732 "superblock": true, 00:12:48.732 "num_base_bdevs": 4, 00:12:48.732 "num_base_bdevs_discovered": 3, 00:12:48.732 "num_base_bdevs_operational": 3, 00:12:48.732 "base_bdevs_list": [ 00:12:48.732 { 00:12:48.732 "name": null, 00:12:48.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.732 "is_configured": false, 00:12:48.732 "data_offset": 0, 00:12:48.732 "data_size": 63488 00:12:48.732 }, 00:12:48.732 { 00:12:48.732 "name": "pt2", 00:12:48.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:48.732 "is_configured": true, 00:12:48.732 "data_offset": 2048, 00:12:48.732 "data_size": 63488 00:12:48.732 }, 00:12:48.732 { 00:12:48.732 "name": "pt3", 00:12:48.732 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:48.732 "is_configured": true, 00:12:48.732 "data_offset": 2048, 00:12:48.732 "data_size": 63488 00:12:48.732 }, 00:12:48.732 { 00:12:48.732 "name": "pt4", 00:12:48.732 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:48.732 "is_configured": true, 00:12:48.732 "data_offset": 2048, 00:12:48.732 "data_size": 63488 00:12:48.732 } 00:12:48.732 ] 00:12:48.732 }' 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.732 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.302 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:49.302 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.302 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.302 [2024-09-30 12:30:00.951436] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:49.302 [2024-09-30 12:30:00.951535] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:49.302 [2024-09-30 12:30:00.951652] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:49.302 [2024-09-30 12:30:00.951779] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:49.302 [2024-09-30 12:30:00.951826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:49.302 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.302 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.302 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.302 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.302 12:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:49.302 12:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:49.302 
12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.302 [2024-09-30 12:30:01.051223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:49.302 [2024-09-30 12:30:01.051278] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.302 [2024-09-30 12:30:01.051298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:49.302 [2024-09-30 12:30:01.051308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.302 [2024-09-30 12:30:01.053983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.302 [2024-09-30 12:30:01.054066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:49.302 [2024-09-30 12:30:01.054161] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:49.302 [2024-09-30 12:30:01.054212] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:49.302 pt2 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.302 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.302 "name": "raid_bdev1", 00:12:49.302 "uuid": "aa1ef880-6de3-4548-b364-751e61ddb5b5", 00:12:49.302 "strip_size_kb": 0, 00:12:49.302 "state": "configuring", 00:12:49.302 "raid_level": "raid1", 00:12:49.302 "superblock": true, 00:12:49.302 "num_base_bdevs": 4, 00:12:49.302 "num_base_bdevs_discovered": 1, 00:12:49.302 "num_base_bdevs_operational": 3, 00:12:49.302 "base_bdevs_list": [ 00:12:49.302 { 00:12:49.302 "name": null, 00:12:49.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.302 "is_configured": false, 00:12:49.303 "data_offset": 2048, 00:12:49.303 "data_size": 63488 00:12:49.303 }, 00:12:49.303 { 00:12:49.303 "name": "pt2", 00:12:49.303 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:49.303 "is_configured": true, 00:12:49.303 "data_offset": 2048, 00:12:49.303 "data_size": 63488 00:12:49.303 }, 00:12:49.303 { 00:12:49.303 "name": null, 00:12:49.303 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:49.303 "is_configured": false, 00:12:49.303 "data_offset": 2048, 00:12:49.303 "data_size": 63488 00:12:49.303 }, 00:12:49.303 { 00:12:49.303 "name": null, 00:12:49.303 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:49.303 "is_configured": false, 00:12:49.303 "data_offset": 2048, 00:12:49.303 "data_size": 63488 00:12:49.303 } 00:12:49.303 ] 00:12:49.303 }' 
00:12:49.303 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.303 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.871 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:49.871 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:49.871 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:49.871 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.871 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.871 [2024-09-30 12:30:01.514469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:49.871 [2024-09-30 12:30:01.514643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.871 [2024-09-30 12:30:01.514688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:49.872 [2024-09-30 12:30:01.514721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.872 [2024-09-30 12:30:01.515305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.872 [2024-09-30 12:30:01.515393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:49.872 [2024-09-30 12:30:01.515527] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:49.872 [2024-09-30 12:30:01.515593] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:49.872 pt3 00:12:49.872 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.872 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:49.872 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.872 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.872 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.872 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.872 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.872 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.872 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.872 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.872 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.872 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.872 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.872 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.872 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.872 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.872 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.872 "name": "raid_bdev1", 00:12:49.872 "uuid": "aa1ef880-6de3-4548-b364-751e61ddb5b5", 00:12:49.872 "strip_size_kb": 0, 00:12:49.872 "state": "configuring", 00:12:49.872 "raid_level": "raid1", 00:12:49.872 "superblock": true, 00:12:49.872 "num_base_bdevs": 4, 00:12:49.872 "num_base_bdevs_discovered": 2, 00:12:49.872 "num_base_bdevs_operational": 3, 00:12:49.872 
"base_bdevs_list": [ 00:12:49.872 { 00:12:49.872 "name": null, 00:12:49.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.872 "is_configured": false, 00:12:49.872 "data_offset": 2048, 00:12:49.872 "data_size": 63488 00:12:49.872 }, 00:12:49.872 { 00:12:49.872 "name": "pt2", 00:12:49.872 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:49.872 "is_configured": true, 00:12:49.872 "data_offset": 2048, 00:12:49.872 "data_size": 63488 00:12:49.872 }, 00:12:49.872 { 00:12:49.872 "name": "pt3", 00:12:49.872 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:49.872 "is_configured": true, 00:12:49.872 "data_offset": 2048, 00:12:49.872 "data_size": 63488 00:12:49.872 }, 00:12:49.872 { 00:12:49.872 "name": null, 00:12:49.872 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:49.872 "is_configured": false, 00:12:49.872 "data_offset": 2048, 00:12:49.872 "data_size": 63488 00:12:49.872 } 00:12:49.872 ] 00:12:49.872 }' 00:12:49.872 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.872 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.131 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.132 [2024-09-30 12:30:01.965685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:50.132 [2024-09-30 12:30:01.965765] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.132 [2024-09-30 12:30:01.965791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:50.132 [2024-09-30 12:30:01.965801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.132 [2024-09-30 12:30:01.966342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.132 [2024-09-30 12:30:01.966359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:50.132 [2024-09-30 12:30:01.966446] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:50.132 [2024-09-30 12:30:01.966482] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:50.132 [2024-09-30 12:30:01.966631] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:50.132 [2024-09-30 12:30:01.966640] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:50.132 [2024-09-30 12:30:01.966905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:50.132 [2024-09-30 12:30:01.967144] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:50.132 [2024-09-30 12:30:01.967160] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:50.132 [2024-09-30 12:30:01.967318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.132 pt4 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.132 12:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.132 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.132 "name": "raid_bdev1", 00:12:50.132 "uuid": "aa1ef880-6de3-4548-b364-751e61ddb5b5", 00:12:50.132 "strip_size_kb": 0, 00:12:50.132 "state": "online", 00:12:50.132 "raid_level": "raid1", 00:12:50.132 "superblock": true, 00:12:50.132 "num_base_bdevs": 4, 00:12:50.132 "num_base_bdevs_discovered": 3, 00:12:50.132 "num_base_bdevs_operational": 3, 00:12:50.132 "base_bdevs_list": [ 00:12:50.132 { 00:12:50.132 "name": null, 00:12:50.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.132 "is_configured": false, 00:12:50.132 
"data_offset": 2048, 00:12:50.132 "data_size": 63488 00:12:50.132 }, 00:12:50.132 { 00:12:50.132 "name": "pt2", 00:12:50.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:50.132 "is_configured": true, 00:12:50.132 "data_offset": 2048, 00:12:50.132 "data_size": 63488 00:12:50.132 }, 00:12:50.132 { 00:12:50.132 "name": "pt3", 00:12:50.132 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:50.132 "is_configured": true, 00:12:50.132 "data_offset": 2048, 00:12:50.132 "data_size": 63488 00:12:50.132 }, 00:12:50.132 { 00:12:50.132 "name": "pt4", 00:12:50.132 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:50.132 "is_configured": true, 00:12:50.132 "data_offset": 2048, 00:12:50.132 "data_size": 63488 00:12:50.132 } 00:12:50.132 ] 00:12:50.132 }' 00:12:50.132 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.132 12:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.701 [2024-09-30 12:30:02.436916] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:50.701 [2024-09-30 12:30:02.437008] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:50.701 [2024-09-30 12:30:02.437194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.701 [2024-09-30 12:30:02.437345] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.701 [2024-09-30 12:30:02.437403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:50.701 12:30:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.701 [2024-09-30 12:30:02.500728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:50.701 [2024-09-30 12:30:02.500843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:50.701 [2024-09-30 12:30:02.500883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:50.701 [2024-09-30 12:30:02.500964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.701 [2024-09-30 12:30:02.503684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.701 [2024-09-30 12:30:02.503772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:50.701 [2024-09-30 12:30:02.503931] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:50.701 [2024-09-30 12:30:02.504031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:50.701 [2024-09-30 12:30:02.504221] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:50.701 [2024-09-30 12:30:02.504283] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:50.701 [2024-09-30 12:30:02.504323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:50.701 [2024-09-30 12:30:02.504434] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:50.701 [2024-09-30 12:30:02.504577] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:50.701 pt1 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.701 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.702 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.702 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.702 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.702 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.702 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.702 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.702 12:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.702 12:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.702 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.702 12:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.702 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.702 "name": "raid_bdev1", 00:12:50.702 "uuid": "aa1ef880-6de3-4548-b364-751e61ddb5b5", 00:12:50.702 "strip_size_kb": 0, 00:12:50.702 "state": "configuring", 00:12:50.702 "raid_level": "raid1", 00:12:50.702 "superblock": true, 00:12:50.702 "num_base_bdevs": 4, 00:12:50.702 "num_base_bdevs_discovered": 2, 00:12:50.702 "num_base_bdevs_operational": 3, 00:12:50.702 "base_bdevs_list": [ 00:12:50.702 { 00:12:50.702 "name": null, 00:12:50.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.702 "is_configured": false, 00:12:50.702 "data_offset": 2048, 00:12:50.702 
"data_size": 63488 00:12:50.702 }, 00:12:50.702 { 00:12:50.702 "name": "pt2", 00:12:50.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:50.702 "is_configured": true, 00:12:50.702 "data_offset": 2048, 00:12:50.702 "data_size": 63488 00:12:50.702 }, 00:12:50.702 { 00:12:50.702 "name": "pt3", 00:12:50.702 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:50.702 "is_configured": true, 00:12:50.702 "data_offset": 2048, 00:12:50.702 "data_size": 63488 00:12:50.702 }, 00:12:50.702 { 00:12:50.702 "name": null, 00:12:50.702 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:50.702 "is_configured": false, 00:12:50.702 "data_offset": 2048, 00:12:50.702 "data_size": 63488 00:12:50.702 } 00:12:50.702 ] 00:12:50.702 }' 00:12:50.702 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.702 12:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.271 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:51.271 12:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.271 12:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.271 12:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:51.271 12:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.271 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:51.271 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:51.271 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.271 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.271 [2024-09-30 
12:30:03.007916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:51.271 [2024-09-30 12:30:03.007984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.271 [2024-09-30 12:30:03.008008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:51.271 [2024-09-30 12:30:03.008017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.271 [2024-09-30 12:30:03.008503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.271 [2024-09-30 12:30:03.008520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:51.271 [2024-09-30 12:30:03.008608] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:51.271 [2024-09-30 12:30:03.008629] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:51.272 [2024-09-30 12:30:03.008786] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:51.272 [2024-09-30 12:30:03.008796] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:51.272 [2024-09-30 12:30:03.009087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:51.272 [2024-09-30 12:30:03.009263] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:51.272 [2024-09-30 12:30:03.009277] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:51.272 [2024-09-30 12:30:03.009444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.272 pt4 00:12:51.272 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.272 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:51.272 12:30:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.272 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.272 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.272 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.272 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.272 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.272 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.272 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.272 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.272 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.272 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.272 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.272 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.272 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.272 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.272 "name": "raid_bdev1", 00:12:51.272 "uuid": "aa1ef880-6de3-4548-b364-751e61ddb5b5", 00:12:51.272 "strip_size_kb": 0, 00:12:51.272 "state": "online", 00:12:51.272 "raid_level": "raid1", 00:12:51.272 "superblock": true, 00:12:51.272 "num_base_bdevs": 4, 00:12:51.272 "num_base_bdevs_discovered": 3, 00:12:51.272 "num_base_bdevs_operational": 3, 00:12:51.272 "base_bdevs_list": [ 00:12:51.272 { 
00:12:51.272 "name": null, 00:12:51.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.272 "is_configured": false, 00:12:51.272 "data_offset": 2048, 00:12:51.272 "data_size": 63488 00:12:51.272 }, 00:12:51.272 { 00:12:51.272 "name": "pt2", 00:12:51.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:51.272 "is_configured": true, 00:12:51.272 "data_offset": 2048, 00:12:51.272 "data_size": 63488 00:12:51.272 }, 00:12:51.272 { 00:12:51.272 "name": "pt3", 00:12:51.272 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:51.272 "is_configured": true, 00:12:51.272 "data_offset": 2048, 00:12:51.272 "data_size": 63488 00:12:51.272 }, 00:12:51.272 { 00:12:51.272 "name": "pt4", 00:12:51.272 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:51.272 "is_configured": true, 00:12:51.272 "data_offset": 2048, 00:12:51.272 "data_size": 63488 00:12:51.272 } 00:12:51.272 ] 00:12:51.272 }' 00:12:51.272 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.272 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.841 
12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:51.841 [2024-09-30 12:30:03.495532] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' aa1ef880-6de3-4548-b364-751e61ddb5b5 '!=' aa1ef880-6de3-4548-b364-751e61ddb5b5 ']' 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74405 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74405 ']' 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74405 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74405 00:12:51.841 killing process with pid 74405 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74405' 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74405 00:12:51.841 [2024-09-30 12:30:03.581580] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.841 [2024-09-30 12:30:03.581673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.841 [2024-09-30 12:30:03.581764] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.841 [2024-09-30 12:30:03.581778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:51.841 12:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74405 00:12:52.411 [2024-09-30 12:30:04.001852] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:53.792 ************************************ 00:12:53.792 END TEST raid_superblock_test 00:12:53.792 ************************************ 00:12:53.792 12:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:53.792 00:12:53.792 real 0m8.816s 00:12:53.792 user 0m13.532s 00:12:53.792 sys 0m1.709s 00:12:53.792 12:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.792 12:30:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.792 12:30:05 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:53.792 12:30:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:53.792 12:30:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:53.792 12:30:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:53.792 ************************************ 00:12:53.792 START TEST raid_read_error_test 00:12:53.792 ************************************ 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:53.792 12:30:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.uEIkHw3KfL 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74892 00:12:53.792 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:53.793 12:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74892 00:12:53.793 12:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74892 ']' 00:12:53.793 12:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.793 12:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:53.793 12:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.793 12:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:53.793 12:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.793 [2024-09-30 12:30:05.537804] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:53.793 [2024-09-30 12:30:05.538463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74892 ] 00:12:54.052 [2024-09-30 12:30:05.707534] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.313 [2024-09-30 12:30:05.955577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.313 [2024-09-30 12:30:06.185245] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.313 [2024-09-30 12:30:06.185280] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.573 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:54.573 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:54.573 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:54.573 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:54.573 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.573 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.573 BaseBdev1_malloc 00:12:54.573 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.573 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:54.573 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.573 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.573 true 00:12:54.573 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:54.573 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:54.573 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.573 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.573 [2024-09-30 12:30:06.432353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:54.573 [2024-09-30 12:30:06.432422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.573 [2024-09-30 12:30:06.432441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:54.573 [2024-09-30 12:30:06.432453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.573 [2024-09-30 12:30:06.434894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.573 [2024-09-30 12:30:06.435001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:54.573 BaseBdev1 00:12:54.573 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.573 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:54.573 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:54.573 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.573 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.835 BaseBdev2_malloc 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.835 true 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.835 [2024-09-30 12:30:06.514955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:54.835 [2024-09-30 12:30:06.515081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.835 [2024-09-30 12:30:06.515103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:54.835 [2024-09-30 12:30:06.515115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.835 [2024-09-30 12:30:06.517508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.835 [2024-09-30 12:30:06.517549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:54.835 BaseBdev2 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.835 BaseBdev3_malloc 00:12:54.835 12:30:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.835 true 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.835 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.835 [2024-09-30 12:30:06.588028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:54.835 [2024-09-30 12:30:06.588150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.835 [2024-09-30 12:30:06.588186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:54.835 [2024-09-30 12:30:06.588197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.835 [2024-09-30 12:30:06.590565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.835 [2024-09-30 12:30:06.590606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:54.835 BaseBdev3 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.836 BaseBdev4_malloc 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.836 true 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.836 [2024-09-30 12:30:06.661204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:54.836 [2024-09-30 12:30:06.661265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.836 [2024-09-30 12:30:06.661298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:54.836 [2024-09-30 12:30:06.661309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.836 [2024-09-30 12:30:06.663677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.836 [2024-09-30 12:30:06.663807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:54.836 BaseBdev4 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.836 [2024-09-30 12:30:06.673293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:54.836 [2024-09-30 12:30:06.675397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:54.836 [2024-09-30 12:30:06.675473] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:54.836 [2024-09-30 12:30:06.675532] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:54.836 [2024-09-30 12:30:06.675776] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:54.836 [2024-09-30 12:30:06.675796] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:54.836 [2024-09-30 12:30:06.676030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:54.836 [2024-09-30 12:30:06.676200] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:54.836 [2024-09-30 12:30:06.676209] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:54.836 [2024-09-30 12:30:06.676357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:54.836 12:30:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.836 "name": "raid_bdev1", 00:12:54.836 "uuid": "cc069ea8-7004-4acb-be0e-69fcd46943cd", 00:12:54.836 "strip_size_kb": 0, 00:12:54.836 "state": "online", 00:12:54.836 "raid_level": "raid1", 00:12:54.836 "superblock": true, 00:12:54.836 "num_base_bdevs": 4, 00:12:54.836 "num_base_bdevs_discovered": 4, 00:12:54.836 "num_base_bdevs_operational": 4, 00:12:54.836 "base_bdevs_list": [ 00:12:54.836 { 
00:12:54.836 "name": "BaseBdev1", 00:12:54.836 "uuid": "2f7a639e-a467-5e8a-b29b-ca85c67ad48e", 00:12:54.836 "is_configured": true, 00:12:54.836 "data_offset": 2048, 00:12:54.836 "data_size": 63488 00:12:54.836 }, 00:12:54.836 { 00:12:54.836 "name": "BaseBdev2", 00:12:54.836 "uuid": "dd16b420-515d-5f41-81b2-5e77190daaa5", 00:12:54.836 "is_configured": true, 00:12:54.836 "data_offset": 2048, 00:12:54.836 "data_size": 63488 00:12:54.836 }, 00:12:54.836 { 00:12:54.836 "name": "BaseBdev3", 00:12:54.836 "uuid": "d44bdec9-92ba-55d5-9daf-902459e5ae4f", 00:12:54.836 "is_configured": true, 00:12:54.836 "data_offset": 2048, 00:12:54.836 "data_size": 63488 00:12:54.836 }, 00:12:54.836 { 00:12:54.836 "name": "BaseBdev4", 00:12:54.836 "uuid": "d45108ee-916c-5119-be91-5c0d3608f050", 00:12:54.836 "is_configured": true, 00:12:54.836 "data_offset": 2048, 00:12:54.836 "data_size": 63488 00:12:54.836 } 00:12:54.836 ] 00:12:54.836 }' 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.836 12:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.404 12:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:55.404 12:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:55.404 [2024-09-30 12:30:07.177720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:56.342 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:56.342 12:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.342 12:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.343 12:30:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.343 12:30:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.343 "name": "raid_bdev1", 00:12:56.343 "uuid": "cc069ea8-7004-4acb-be0e-69fcd46943cd", 00:12:56.343 "strip_size_kb": 0, 00:12:56.343 "state": "online", 00:12:56.343 "raid_level": "raid1", 00:12:56.343 "superblock": true, 00:12:56.343 "num_base_bdevs": 4, 00:12:56.343 "num_base_bdevs_discovered": 4, 00:12:56.343 "num_base_bdevs_operational": 4, 00:12:56.343 "base_bdevs_list": [ 00:12:56.343 { 00:12:56.343 "name": "BaseBdev1", 00:12:56.343 "uuid": "2f7a639e-a467-5e8a-b29b-ca85c67ad48e", 00:12:56.343 "is_configured": true, 00:12:56.343 "data_offset": 2048, 00:12:56.343 "data_size": 63488 00:12:56.343 }, 00:12:56.343 { 00:12:56.343 "name": "BaseBdev2", 00:12:56.343 "uuid": "dd16b420-515d-5f41-81b2-5e77190daaa5", 00:12:56.343 "is_configured": true, 00:12:56.343 "data_offset": 2048, 00:12:56.343 "data_size": 63488 00:12:56.343 }, 00:12:56.343 { 00:12:56.343 "name": "BaseBdev3", 00:12:56.343 "uuid": "d44bdec9-92ba-55d5-9daf-902459e5ae4f", 00:12:56.343 "is_configured": true, 00:12:56.343 "data_offset": 2048, 00:12:56.343 "data_size": 63488 00:12:56.343 }, 00:12:56.343 { 00:12:56.343 "name": "BaseBdev4", 00:12:56.343 "uuid": "d45108ee-916c-5119-be91-5c0d3608f050", 00:12:56.343 "is_configured": true, 00:12:56.343 "data_offset": 2048, 00:12:56.343 "data_size": 63488 00:12:56.343 } 00:12:56.343 ] 00:12:56.343 }' 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.343 12:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.912 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:56.912 12:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.912 12:30:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:56.912 [2024-09-30 12:30:08.579908] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:56.912 [2024-09-30 12:30:08.580040] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:56.912 [2024-09-30 12:30:08.582702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:56.912 [2024-09-30 12:30:08.582841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.912 [2024-09-30 12:30:08.583008] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:56.912 [2024-09-30 12:30:08.583059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:56.912 { 00:12:56.912 "results": [ 00:12:56.912 { 00:12:56.912 "job": "raid_bdev1", 00:12:56.912 "core_mask": "0x1", 00:12:56.912 "workload": "randrw", 00:12:56.912 "percentage": 50, 00:12:56.912 "status": "finished", 00:12:56.912 "queue_depth": 1, 00:12:56.912 "io_size": 131072, 00:12:56.912 "runtime": 1.402898, 00:12:56.912 "iops": 7912.905998868057, 00:12:56.912 "mibps": 989.1132498585072, 00:12:56.912 "io_failed": 0, 00:12:56.912 "io_timeout": 0, 00:12:56.912 "avg_latency_us": 123.83852432350994, 00:12:56.912 "min_latency_us": 22.358078602620086, 00:12:56.912 "max_latency_us": 1595.4724890829693 00:12:56.912 } 00:12:56.912 ], 00:12:56.912 "core_count": 1 00:12:56.912 } 00:12:56.912 12:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.912 12:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74892 00:12:56.912 12:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74892 ']' 00:12:56.912 12:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74892 00:12:56.912 12:30:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:12:56.912 12:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:56.912 12:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74892 00:12:56.913 12:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:56.913 12:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:56.913 12:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74892' 00:12:56.913 killing process with pid 74892 00:12:56.913 12:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74892 00:12:56.913 [2024-09-30 12:30:08.629795] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:56.913 12:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74892 00:12:57.172 [2024-09-30 12:30:08.977279] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:58.552 12:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.uEIkHw3KfL 00:12:58.552 12:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:58.552 12:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:58.552 12:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:58.552 12:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:58.552 12:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:58.552 12:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:58.552 12:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:58.552 00:12:58.552 real 0m4.965s 00:12:58.552 user 0m5.635s 00:12:58.552 sys 0m0.742s 
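The bdevperf result block above is internally consistent: with `io_size` 131072 (128 KiB), the reported throughput is `mibps = iops * io_size / 2^20`. A quick arithmetic check (not part of the test suite) confirms the figures:

```shell
# Consistency check for the result block above (not part of the test suite):
# mibps = iops * io_size / 2^20, with io_size = 131072 bytes (128 KiB).
awk 'BEGIN { printf "%.3f\n", 7912.905998868057 * 131072 / (1024 * 1024) }'
# → 989.113, matching the reported "mibps" to three decimals
```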
00:12:58.552 12:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:58.553 12:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.553 ************************************ 00:12:58.553 END TEST raid_read_error_test 00:12:58.553 ************************************ 00:12:58.813 12:30:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:58.813 12:30:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:58.813 12:30:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:58.813 12:30:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:58.813 ************************************ 00:12:58.813 START TEST raid_write_error_test 00:12:58.813 ************************************ 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LUmXVucA7o 00:12:58.813 12:30:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75038 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75038 00:12:58.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75038 ']' 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:58.813 12:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.813 [2024-09-30 12:30:10.577957] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:58.813 [2024-09-30 12:30:10.578077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75038 ] 00:12:59.074 [2024-09-30 12:30:10.746988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.333 [2024-09-30 12:30:10.991799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.333 [2024-09-30 12:30:11.219069] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.333 [2024-09-30 12:30:11.219207] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.593 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:59.593 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:59.593 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:59.593 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:59.593 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.593 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.593 BaseBdev1_malloc 00:12:59.593 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.593 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:59.593 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.593 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.593 true 00:12:59.593 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:59.593 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:59.593 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.593 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.593 [2024-09-30 12:30:11.463522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:59.593 [2024-09-30 12:30:11.463662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.593 [2024-09-30 12:30:11.463684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:59.593 [2024-09-30 12:30:11.463696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.593 [2024-09-30 12:30:11.466165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.593 [2024-09-30 12:30:11.466205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:59.593 BaseBdev1 00:12:59.593 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.593 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:59.593 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:59.593 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.593 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.853 BaseBdev2_malloc 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:59.853 12:30:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.853 true 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.853 [2024-09-30 12:30:11.560646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:59.853 [2024-09-30 12:30:11.560821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.853 [2024-09-30 12:30:11.560874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:59.853 [2024-09-30 12:30:11.560909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.853 [2024-09-30 12:30:11.563248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.853 [2024-09-30 12:30:11.563326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:59.853 BaseBdev2 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:59.853 BaseBdev3_malloc 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.853 true 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.853 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.853 [2024-09-30 12:30:11.632455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:59.853 [2024-09-30 12:30:11.632513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.854 [2024-09-30 12:30:11.632530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:59.854 [2024-09-30 12:30:11.632542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.854 [2024-09-30 12:30:11.634897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.854 [2024-09-30 12:30:11.634935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:59.854 BaseBdev3 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.854 BaseBdev4_malloc 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.854 true 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.854 [2024-09-30 12:30:11.704995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:59.854 [2024-09-30 12:30:11.705105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.854 [2024-09-30 12:30:11.705154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:59.854 [2024-09-30 12:30:11.705207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.854 [2024-09-30 12:30:11.707563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.854 [2024-09-30 12:30:11.707641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:59.854 BaseBdev4 
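The trace above builds each base bdev as a three-layer stack before assembling the RAID: a malloc backing bdev, an error-injection wrapper (`bdev_error_create`, which produces the `EE_` bdev later targeted by `bdev_error_inject_error`), and a passthru bdev giving it a stable name. A dry-run sketch of that sequence (the `echo rpc.py` stand-in is an assumption; point `RPC` at `scripts/rpc.py` of a running SPDK app to issue the calls for real):

```shell
# Dry-run sketch of the base-bdev stack built in the trace above.
# "echo rpc.py" only prints the commands; substitute the real rpc.py to execute them.
RPC="echo rpc.py"
for i in 1 2 3 4; do
    $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"                   # backing store
    $RPC bdev_error_create "BaseBdev${i}_malloc"                              # EE_ error-injection bdev
    $RPC bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"   # stable top-level name
done
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
```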
00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.854 [2024-09-30 12:30:11.717060] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.854 [2024-09-30 12:30:11.719173] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.854 [2024-09-30 12:30:11.719302] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:59.854 [2024-09-30 12:30:11.719405] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:59.854 [2024-09-30 12:30:11.719634] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:59.854 [2024-09-30 12:30:11.719650] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:59.854 [2024-09-30 12:30:11.719907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:59.854 [2024-09-30 12:30:11.720082] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:59.854 [2024-09-30 12:30:11.720092] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:59.854 [2024-09-30 12:30:11.720246] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.854 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.113 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.113 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.113 "name": "raid_bdev1", 00:13:00.113 "uuid": "f7a937ba-ef1f-40bb-901b-8afba661554b", 00:13:00.113 "strip_size_kb": 0, 00:13:00.113 "state": "online", 00:13:00.113 "raid_level": "raid1", 00:13:00.113 "superblock": true, 00:13:00.113 "num_base_bdevs": 4, 00:13:00.113 "num_base_bdevs_discovered": 4, 00:13:00.113 
"num_base_bdevs_operational": 4, 00:13:00.113 "base_bdevs_list": [ 00:13:00.113 { 00:13:00.113 "name": "BaseBdev1", 00:13:00.113 "uuid": "e3e4830b-f9a7-5f5a-b0d8-7820cefc4ab2", 00:13:00.113 "is_configured": true, 00:13:00.113 "data_offset": 2048, 00:13:00.113 "data_size": 63488 00:13:00.113 }, 00:13:00.113 { 00:13:00.113 "name": "BaseBdev2", 00:13:00.113 "uuid": "c1a47bd8-7ba0-5aeb-accc-a368b2c58916", 00:13:00.113 "is_configured": true, 00:13:00.113 "data_offset": 2048, 00:13:00.113 "data_size": 63488 00:13:00.113 }, 00:13:00.113 { 00:13:00.113 "name": "BaseBdev3", 00:13:00.113 "uuid": "e0e1c7e8-2e29-5361-bae8-3100c492c34d", 00:13:00.113 "is_configured": true, 00:13:00.113 "data_offset": 2048, 00:13:00.113 "data_size": 63488 00:13:00.113 }, 00:13:00.113 { 00:13:00.113 "name": "BaseBdev4", 00:13:00.113 "uuid": "af192315-b577-5c97-8e46-c2c2e26b62a8", 00:13:00.113 "is_configured": true, 00:13:00.113 "data_offset": 2048, 00:13:00.113 "data_size": 63488 00:13:00.113 } 00:13:00.113 ] 00:13:00.113 }' 00:13:00.113 12:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.113 12:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.373 12:30:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:00.373 12:30:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:00.373 [2024-09-30 12:30:12.205725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.330 [2024-09-30 12:30:13.143098] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:01.330 [2024-09-30 12:30:13.143265] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:01.330 [2024-09-30 12:30:13.143604] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.330 "name": "raid_bdev1", 00:13:01.330 "uuid": "f7a937ba-ef1f-40bb-901b-8afba661554b", 00:13:01.330 "strip_size_kb": 0, 00:13:01.330 "state": "online", 00:13:01.330 "raid_level": "raid1", 00:13:01.330 "superblock": true, 00:13:01.330 "num_base_bdevs": 4, 00:13:01.330 "num_base_bdevs_discovered": 3, 00:13:01.330 "num_base_bdevs_operational": 3, 00:13:01.330 "base_bdevs_list": [ 00:13:01.330 { 00:13:01.330 "name": null, 00:13:01.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.330 "is_configured": false, 00:13:01.330 "data_offset": 0, 00:13:01.330 "data_size": 63488 00:13:01.330 }, 00:13:01.330 { 00:13:01.330 "name": "BaseBdev2", 00:13:01.330 "uuid": "c1a47bd8-7ba0-5aeb-accc-a368b2c58916", 00:13:01.330 "is_configured": true, 00:13:01.330 "data_offset": 2048, 00:13:01.330 "data_size": 63488 00:13:01.330 }, 00:13:01.330 { 00:13:01.330 "name": "BaseBdev3", 00:13:01.330 "uuid": "e0e1c7e8-2e29-5361-bae8-3100c492c34d", 00:13:01.330 "is_configured": true, 00:13:01.330 "data_offset": 2048, 00:13:01.330 "data_size": 63488 00:13:01.330 }, 00:13:01.330 { 00:13:01.330 "name": "BaseBdev4", 00:13:01.330 "uuid": "af192315-b577-5c97-8e46-c2c2e26b62a8", 00:13:01.330 "is_configured": true, 00:13:01.330 "data_offset": 2048, 00:13:01.330 "data_size": 63488 00:13:01.330 } 00:13:01.330 ] 
00:13:01.330 }' 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.330 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.900 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:01.900 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.900 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.900 [2024-09-30 12:30:13.548778] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:01.900 [2024-09-30 12:30:13.548821] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:01.900 [2024-09-30 12:30:13.551585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.900 [2024-09-30 12:30:13.551673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.900 [2024-09-30 12:30:13.551834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:01.900 [2024-09-30 12:30:13.551885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:01.900 { 00:13:01.900 "results": [ 00:13:01.900 { 00:13:01.900 "job": "raid_bdev1", 00:13:01.900 "core_mask": "0x1", 00:13:01.900 "workload": "randrw", 00:13:01.900 "percentage": 50, 00:13:01.900 "status": "finished", 00:13:01.900 "queue_depth": 1, 00:13:01.900 "io_size": 131072, 00:13:01.900 "runtime": 1.343545, 00:13:01.900 "iops": 8709.04956663156, 00:13:01.900 "mibps": 1088.631195828945, 00:13:01.900 "io_failed": 0, 00:13:01.900 "io_timeout": 0, 00:13:01.900 "avg_latency_us": 112.2886871536005, 00:13:01.900 "min_latency_us": 22.022707423580787, 00:13:01.900 "max_latency_us": 1459.5353711790392 00:13:01.900 } 00:13:01.900 ], 00:13:01.900 "core_count": 1 
00:13:01.900 } 00:13:01.900 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.900 12:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75038 00:13:01.900 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75038 ']' 00:13:01.900 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75038 00:13:01.900 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:13:01.900 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:01.900 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75038 00:13:01.900 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:01.900 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:01.900 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75038' 00:13:01.900 killing process with pid 75038 00:13:01.900 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75038 00:13:01.900 [2024-09-30 12:30:13.598628] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:01.900 12:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75038 00:13:02.160 [2024-09-30 12:30:13.945136] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:03.651 12:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LUmXVucA7o 00:13:03.651 12:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:03.651 12:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:03.651 12:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:13:03.651 12:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:03.651 ************************************ 00:13:03.651 END TEST raid_write_error_test 00:13:03.651 ************************************ 00:13:03.651 12:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:03.651 12:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:03.651 12:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:03.651 00:13:03.651 real 0m4.886s 00:13:03.651 user 0m5.495s 00:13:03.651 sys 0m0.702s 00:13:03.651 12:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:03.651 12:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.651 12:30:15 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:13:03.651 12:30:15 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:03.651 12:30:15 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:13:03.651 12:30:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:03.651 12:30:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.651 12:30:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:03.651 ************************************ 00:13:03.651 START TEST raid_rebuild_test 00:13:03.651 ************************************ 00:13:03.651 12:30:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:13:03.651 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:03.651 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:03.651 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:03.651 
12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:03.651 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:03.651 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75187 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75187 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 75187 ']' 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:03.652 12:30:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.652 [2024-09-30 12:30:15.533059] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:03.652 [2024-09-30 12:30:15.533226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:03.652 Zero copy mechanism will not be used. 
00:13:03.652 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75187 ] 00:13:03.911 [2024-09-30 12:30:15.697569] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.171 [2024-09-30 12:30:15.943650] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.431 [2024-09-30 12:30:16.172096] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.431 [2024-09-30 12:30:16.172231] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.691 BaseBdev1_malloc 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.691 [2024-09-30 12:30:16.400055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:04.691 [2024-09-30 12:30:16.400209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.691 [2024-09-30 
12:30:16.400253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:04.691 [2024-09-30 12:30:16.400295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.691 [2024-09-30 12:30:16.402803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.691 [2024-09-30 12:30:16.402879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:04.691 BaseBdev1 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.691 BaseBdev2_malloc 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.691 [2024-09-30 12:30:16.471168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:04.691 [2024-09-30 12:30:16.471292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.691 [2024-09-30 12:30:16.471335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:04.691 [2024-09-30 12:30:16.471392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:04.691 [2024-09-30 12:30:16.473862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.691 [2024-09-30 12:30:16.473938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:04.691 BaseBdev2 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.691 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.692 spare_malloc 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.692 spare_delay 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.692 [2024-09-30 12:30:16.543873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:04.692 [2024-09-30 12:30:16.544005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.692 [2024-09-30 12:30:16.544044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:13:04.692 [2024-09-30 12:30:16.544058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.692 [2024-09-30 12:30:16.546435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.692 [2024-09-30 12:30:16.546476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:04.692 spare 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.692 [2024-09-30 12:30:16.555903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.692 [2024-09-30 12:30:16.557954] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:04.692 [2024-09-30 12:30:16.558041] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:04.692 [2024-09-30 12:30:16.558053] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:04.692 [2024-09-30 12:30:16.558320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:04.692 [2024-09-30 12:30:16.558484] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:04.692 [2024-09-30 12:30:16.558503] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:04.692 [2024-09-30 12:30:16.558646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.692 
12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.692 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.951 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.951 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.951 "name": "raid_bdev1", 00:13:04.951 "uuid": "42245ae9-d628-4669-91f5-db86d5e45831", 00:13:04.951 "strip_size_kb": 0, 00:13:04.951 "state": "online", 00:13:04.951 "raid_level": "raid1", 00:13:04.951 "superblock": false, 00:13:04.951 "num_base_bdevs": 2, 00:13:04.951 "num_base_bdevs_discovered": 
2, 00:13:04.951 "num_base_bdevs_operational": 2, 00:13:04.951 "base_bdevs_list": [ 00:13:04.951 { 00:13:04.951 "name": "BaseBdev1", 00:13:04.951 "uuid": "8b15289e-9365-58ce-8b6d-b24bbd9ed6a3", 00:13:04.951 "is_configured": true, 00:13:04.951 "data_offset": 0, 00:13:04.951 "data_size": 65536 00:13:04.951 }, 00:13:04.951 { 00:13:04.951 "name": "BaseBdev2", 00:13:04.951 "uuid": "a09400a1-5bba-547e-a8d4-16709736a8d9", 00:13:04.951 "is_configured": true, 00:13:04.951 "data_offset": 0, 00:13:04.951 "data_size": 65536 00:13:04.951 } 00:13:04.951 ] 00:13:04.951 }' 00:13:04.951 12:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.951 12:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.211 12:30:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:05.211 12:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.211 12:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.211 12:30:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:05.211 [2024-09-30 12:30:17.047418] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:05.211 12:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.211 12:30:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:05.211 12:30:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:05.211 12:30:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.211 12:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.211 12:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.470 12:30:17 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.470 12:30:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:05.470 12:30:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:05.470 12:30:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:05.470 12:30:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:05.470 12:30:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:05.470 12:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:05.470 12:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:05.470 12:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:05.470 12:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:05.470 12:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:05.470 12:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:05.470 12:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:05.470 12:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:05.470 12:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:05.470 [2024-09-30 12:30:17.326714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:05.470 /dev/nbd0 00:13:05.729 12:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:05.729 12:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:05.729 12:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
00:13:05.729 12:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:05.729 12:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:05.729 12:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:05.729 12:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:05.729 12:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:05.729 12:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:05.729 12:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:05.730 12:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.730 1+0 records in 00:13:05.730 1+0 records out 00:13:05.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575939 s, 7.1 MB/s 00:13:05.730 12:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.730 12:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:05.730 12:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.730 12:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:05.730 12:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:05.730 12:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.730 12:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:05.730 12:30:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:05.730 12:30:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:13:05.730 12:30:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:09.922 65536+0 records in 00:13:09.922 65536+0 records out 00:13:09.922 33554432 bytes (34 MB, 32 MiB) copied, 4.0951 s, 8.2 MB/s 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:09.922 [2024-09-30 12:30:21.700492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.922 
12:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.922 [2024-09-30 12:30:21.736522] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.922 "name": "raid_bdev1", 00:13:09.922 "uuid": "42245ae9-d628-4669-91f5-db86d5e45831", 00:13:09.922 "strip_size_kb": 0, 00:13:09.922 "state": "online", 00:13:09.922 "raid_level": "raid1", 00:13:09.922 "superblock": false, 00:13:09.922 "num_base_bdevs": 2, 00:13:09.922 "num_base_bdevs_discovered": 1, 00:13:09.922 "num_base_bdevs_operational": 1, 00:13:09.922 "base_bdevs_list": [ 00:13:09.922 { 00:13:09.922 "name": null, 00:13:09.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.922 "is_configured": false, 00:13:09.922 "data_offset": 0, 00:13:09.922 "data_size": 65536 00:13:09.922 }, 00:13:09.922 { 00:13:09.922 "name": "BaseBdev2", 00:13:09.922 "uuid": "a09400a1-5bba-547e-a8d4-16709736a8d9", 00:13:09.922 "is_configured": true, 00:13:09.922 "data_offset": 0, 00:13:09.922 "data_size": 65536 00:13:09.922 } 00:13:09.922 ] 00:13:09.922 }' 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.922 12:30:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.491 12:30:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:10.491 12:30:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.491 12:30:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.491 [2024-09-30 12:30:22.195782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:10.491 [2024-09-30 12:30:22.212468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:13:10.491 12:30:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.491 12:30:22 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:10.491 [2024-09-30 12:30:22.214582] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:11.427 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.427 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.428 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.428 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.428 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.428 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.428 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.428 12:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.428 12:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.428 12:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.428 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.428 "name": "raid_bdev1", 00:13:11.428 "uuid": "42245ae9-d628-4669-91f5-db86d5e45831", 00:13:11.428 "strip_size_kb": 0, 00:13:11.428 "state": "online", 00:13:11.428 "raid_level": "raid1", 00:13:11.428 "superblock": false, 00:13:11.428 "num_base_bdevs": 2, 00:13:11.428 "num_base_bdevs_discovered": 2, 00:13:11.428 "num_base_bdevs_operational": 2, 00:13:11.428 "process": { 00:13:11.428 "type": "rebuild", 00:13:11.428 "target": "spare", 00:13:11.428 "progress": { 00:13:11.428 "blocks": 20480, 00:13:11.428 "percent": 31 00:13:11.428 } 00:13:11.428 }, 00:13:11.428 "base_bdevs_list": [ 00:13:11.428 { 
00:13:11.428 "name": "spare", 00:13:11.428 "uuid": "de0f76bd-39e0-59ed-bc37-843634a952e7", 00:13:11.428 "is_configured": true, 00:13:11.428 "data_offset": 0, 00:13:11.428 "data_size": 65536 00:13:11.428 }, 00:13:11.428 { 00:13:11.428 "name": "BaseBdev2", 00:13:11.428 "uuid": "a09400a1-5bba-547e-a8d4-16709736a8d9", 00:13:11.428 "is_configured": true, 00:13:11.428 "data_offset": 0, 00:13:11.428 "data_size": 65536 00:13:11.428 } 00:13:11.428 ] 00:13:11.428 }' 00:13:11.428 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.428 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.687 [2024-09-30 12:30:23.365826] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.687 [2024-09-30 12:30:23.423388] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:11.687 [2024-09-30 12:30:23.423450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.687 [2024-09-30 12:30:23.423465] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.687 [2024-09-30 12:30:23.423476] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.687 12:30:23 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.687 "name": "raid_bdev1", 00:13:11.687 "uuid": "42245ae9-d628-4669-91f5-db86d5e45831", 00:13:11.687 "strip_size_kb": 0, 00:13:11.687 "state": "online", 00:13:11.687 "raid_level": "raid1", 00:13:11.687 "superblock": false, 00:13:11.687 "num_base_bdevs": 2, 00:13:11.687 "num_base_bdevs_discovered": 1, 
00:13:11.687 "num_base_bdevs_operational": 1, 00:13:11.687 "base_bdevs_list": [ 00:13:11.687 { 00:13:11.687 "name": null, 00:13:11.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.687 "is_configured": false, 00:13:11.687 "data_offset": 0, 00:13:11.687 "data_size": 65536 00:13:11.687 }, 00:13:11.687 { 00:13:11.687 "name": "BaseBdev2", 00:13:11.687 "uuid": "a09400a1-5bba-547e-a8d4-16709736a8d9", 00:13:11.687 "is_configured": true, 00:13:11.687 "data_offset": 0, 00:13:11.687 "data_size": 65536 00:13:11.687 } 00:13:11.687 ] 00:13:11.687 }' 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.687 12:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.256 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:12.256 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.256 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:12.256 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:12.256 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.256 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.256 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.256 12:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.256 12:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.256 12:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.256 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.256 "name": "raid_bdev1", 00:13:12.256 "uuid": 
"42245ae9-d628-4669-91f5-db86d5e45831", 00:13:12.256 "strip_size_kb": 0, 00:13:12.256 "state": "online", 00:13:12.256 "raid_level": "raid1", 00:13:12.256 "superblock": false, 00:13:12.256 "num_base_bdevs": 2, 00:13:12.256 "num_base_bdevs_discovered": 1, 00:13:12.256 "num_base_bdevs_operational": 1, 00:13:12.256 "base_bdevs_list": [ 00:13:12.256 { 00:13:12.256 "name": null, 00:13:12.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.256 "is_configured": false, 00:13:12.256 "data_offset": 0, 00:13:12.256 "data_size": 65536 00:13:12.256 }, 00:13:12.256 { 00:13:12.256 "name": "BaseBdev2", 00:13:12.256 "uuid": "a09400a1-5bba-547e-a8d4-16709736a8d9", 00:13:12.256 "is_configured": true, 00:13:12.256 "data_offset": 0, 00:13:12.256 "data_size": 65536 00:13:12.256 } 00:13:12.256 ] 00:13:12.256 }' 00:13:12.256 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.256 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:12.256 12:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.256 12:30:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:12.256 12:30:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:12.256 12:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.256 12:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.257 [2024-09-30 12:30:24.016887] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:12.257 [2024-09-30 12:30:24.032833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:13:12.257 12:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.257 12:30:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:13:12.257 [2024-09-30 12:30:24.035017] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:13.194 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.194 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.194 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.194 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.195 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.195 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.195 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.195 12:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.195 12:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.195 12:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.454 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.454 "name": "raid_bdev1", 00:13:13.454 "uuid": "42245ae9-d628-4669-91f5-db86d5e45831", 00:13:13.454 "strip_size_kb": 0, 00:13:13.454 "state": "online", 00:13:13.454 "raid_level": "raid1", 00:13:13.454 "superblock": false, 00:13:13.454 "num_base_bdevs": 2, 00:13:13.454 "num_base_bdevs_discovered": 2, 00:13:13.454 "num_base_bdevs_operational": 2, 00:13:13.454 "process": { 00:13:13.454 "type": "rebuild", 00:13:13.454 "target": "spare", 00:13:13.454 "progress": { 00:13:13.454 "blocks": 20480, 00:13:13.454 "percent": 31 00:13:13.454 } 00:13:13.454 }, 00:13:13.454 "base_bdevs_list": [ 00:13:13.454 { 00:13:13.454 "name": "spare", 00:13:13.454 "uuid": 
"de0f76bd-39e0-59ed-bc37-843634a952e7", 00:13:13.454 "is_configured": true, 00:13:13.454 "data_offset": 0, 00:13:13.454 "data_size": 65536 00:13:13.454 }, 00:13:13.454 { 00:13:13.454 "name": "BaseBdev2", 00:13:13.454 "uuid": "a09400a1-5bba-547e-a8d4-16709736a8d9", 00:13:13.454 "is_configured": true, 00:13:13.454 "data_offset": 0, 00:13:13.454 "data_size": 65536 00:13:13.454 } 00:13:13.454 ] 00:13:13.454 }' 00:13:13.454 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.454 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.454 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.454 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.454 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:13.454 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:13.454 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:13.454 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:13.454 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=370 00:13:13.455 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:13.455 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.455 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.455 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.455 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.455 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:13:13.455 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.455 12:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.455 12:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.455 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.455 12:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.455 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.455 "name": "raid_bdev1", 00:13:13.455 "uuid": "42245ae9-d628-4669-91f5-db86d5e45831", 00:13:13.455 "strip_size_kb": 0, 00:13:13.455 "state": "online", 00:13:13.455 "raid_level": "raid1", 00:13:13.455 "superblock": false, 00:13:13.455 "num_base_bdevs": 2, 00:13:13.455 "num_base_bdevs_discovered": 2, 00:13:13.455 "num_base_bdevs_operational": 2, 00:13:13.455 "process": { 00:13:13.455 "type": "rebuild", 00:13:13.455 "target": "spare", 00:13:13.455 "progress": { 00:13:13.455 "blocks": 22528, 00:13:13.455 "percent": 34 00:13:13.455 } 00:13:13.455 }, 00:13:13.455 "base_bdevs_list": [ 00:13:13.455 { 00:13:13.455 "name": "spare", 00:13:13.455 "uuid": "de0f76bd-39e0-59ed-bc37-843634a952e7", 00:13:13.455 "is_configured": true, 00:13:13.455 "data_offset": 0, 00:13:13.455 "data_size": 65536 00:13:13.455 }, 00:13:13.455 { 00:13:13.455 "name": "BaseBdev2", 00:13:13.455 "uuid": "a09400a1-5bba-547e-a8d4-16709736a8d9", 00:13:13.455 "is_configured": true, 00:13:13.455 "data_offset": 0, 00:13:13.455 "data_size": 65536 00:13:13.455 } 00:13:13.455 ] 00:13:13.455 }' 00:13:13.455 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.455 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.455 12:30:25 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.455 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.455 12:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:14.836 12:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:14.836 12:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.836 12:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.836 12:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.836 12:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.836 12:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.836 12:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.836 12:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.836 12:30:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.836 12:30:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.836 12:30:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.836 12:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.836 "name": "raid_bdev1", 00:13:14.836 "uuid": "42245ae9-d628-4669-91f5-db86d5e45831", 00:13:14.836 "strip_size_kb": 0, 00:13:14.836 "state": "online", 00:13:14.836 "raid_level": "raid1", 00:13:14.836 "superblock": false, 00:13:14.836 "num_base_bdevs": 2, 00:13:14.836 "num_base_bdevs_discovered": 2, 00:13:14.836 "num_base_bdevs_operational": 2, 00:13:14.836 "process": { 00:13:14.836 "type": "rebuild", 00:13:14.836 "target": "spare", 
00:13:14.836 "progress": { 00:13:14.836 "blocks": 47104, 00:13:14.836 "percent": 71 00:13:14.836 } 00:13:14.836 }, 00:13:14.836 "base_bdevs_list": [ 00:13:14.836 { 00:13:14.836 "name": "spare", 00:13:14.836 "uuid": "de0f76bd-39e0-59ed-bc37-843634a952e7", 00:13:14.836 "is_configured": true, 00:13:14.836 "data_offset": 0, 00:13:14.836 "data_size": 65536 00:13:14.836 }, 00:13:14.836 { 00:13:14.836 "name": "BaseBdev2", 00:13:14.836 "uuid": "a09400a1-5bba-547e-a8d4-16709736a8d9", 00:13:14.836 "is_configured": true, 00:13:14.836 "data_offset": 0, 00:13:14.836 "data_size": 65536 00:13:14.836 } 00:13:14.836 ] 00:13:14.836 }' 00:13:14.836 12:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.836 12:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.836 12:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.836 12:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.836 12:30:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:15.406 [2024-09-30 12:30:27.258473] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:15.406 [2024-09-30 12:30:27.258589] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:15.406 [2024-09-30 12:30:27.258647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.666 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:15.666 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.666 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.666 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:13:15.666 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.667 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.667 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.667 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.667 12:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.667 12:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.667 12:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.667 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.667 "name": "raid_bdev1", 00:13:15.667 "uuid": "42245ae9-d628-4669-91f5-db86d5e45831", 00:13:15.667 "strip_size_kb": 0, 00:13:15.667 "state": "online", 00:13:15.667 "raid_level": "raid1", 00:13:15.667 "superblock": false, 00:13:15.667 "num_base_bdevs": 2, 00:13:15.667 "num_base_bdevs_discovered": 2, 00:13:15.667 "num_base_bdevs_operational": 2, 00:13:15.667 "base_bdevs_list": [ 00:13:15.667 { 00:13:15.667 "name": "spare", 00:13:15.667 "uuid": "de0f76bd-39e0-59ed-bc37-843634a952e7", 00:13:15.667 "is_configured": true, 00:13:15.667 "data_offset": 0, 00:13:15.667 "data_size": 65536 00:13:15.667 }, 00:13:15.667 { 00:13:15.667 "name": "BaseBdev2", 00:13:15.667 "uuid": "a09400a1-5bba-547e-a8d4-16709736a8d9", 00:13:15.667 "is_configured": true, 00:13:15.667 "data_offset": 0, 00:13:15.667 "data_size": 65536 00:13:15.667 } 00:13:15.667 ] 00:13:15.667 }' 00:13:15.667 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.667 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:15.667 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.927 "name": "raid_bdev1", 00:13:15.927 "uuid": "42245ae9-d628-4669-91f5-db86d5e45831", 00:13:15.927 "strip_size_kb": 0, 00:13:15.927 "state": "online", 00:13:15.927 "raid_level": "raid1", 00:13:15.927 "superblock": false, 00:13:15.927 "num_base_bdevs": 2, 00:13:15.927 "num_base_bdevs_discovered": 2, 00:13:15.927 "num_base_bdevs_operational": 2, 00:13:15.927 "base_bdevs_list": [ 00:13:15.927 { 00:13:15.927 "name": "spare", 00:13:15.927 "uuid": "de0f76bd-39e0-59ed-bc37-843634a952e7", 00:13:15.927 "is_configured": true, 00:13:15.927 "data_offset": 0, 00:13:15.927 "data_size": 65536 
00:13:15.927 }, 00:13:15.927 { 00:13:15.927 "name": "BaseBdev2", 00:13:15.927 "uuid": "a09400a1-5bba-547e-a8d4-16709736a8d9", 00:13:15.927 "is_configured": true, 00:13:15.927 "data_offset": 0, 00:13:15.927 "data_size": 65536 00:13:15.927 } 00:13:15.927 ] 00:13:15.927 }' 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.927 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.927 "name": "raid_bdev1", 00:13:15.927 "uuid": "42245ae9-d628-4669-91f5-db86d5e45831", 00:13:15.927 "strip_size_kb": 0, 00:13:15.927 "state": "online", 00:13:15.927 "raid_level": "raid1", 00:13:15.927 "superblock": false, 00:13:15.927 "num_base_bdevs": 2, 00:13:15.927 "num_base_bdevs_discovered": 2, 00:13:15.927 "num_base_bdevs_operational": 2, 00:13:15.927 "base_bdevs_list": [ 00:13:15.927 { 00:13:15.927 "name": "spare", 00:13:15.928 "uuid": "de0f76bd-39e0-59ed-bc37-843634a952e7", 00:13:15.928 "is_configured": true, 00:13:15.928 "data_offset": 0, 00:13:15.928 "data_size": 65536 00:13:15.928 }, 00:13:15.928 { 00:13:15.928 "name": "BaseBdev2", 00:13:15.928 "uuid": "a09400a1-5bba-547e-a8d4-16709736a8d9", 00:13:15.928 "is_configured": true, 00:13:15.928 "data_offset": 0, 00:13:15.928 "data_size": 65536 00:13:15.928 } 00:13:15.928 ] 00:13:15.928 }' 00:13:15.928 12:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.928 12:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.496 [2024-09-30 12:30:28.175198] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:16.496 [2024-09-30 12:30:28.175292] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:13:16.496 [2024-09-30 12:30:28.175429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:16.496 [2024-09-30 12:30:28.175510] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:16.496 [2024-09-30 12:30:28.175519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:16.496 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:16.756 /dev/nbd0 00:13:16.756 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:16.756 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:16.756 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:16.756 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:16.756 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:16.756 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:16.756 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:16.756 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:16.756 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:16.756 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:16.756 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:16.756 1+0 records in 00:13:16.756 1+0 records out 00:13:16.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481674 s, 8.5 MB/s 00:13:16.756 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:16.756 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:16.756 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.756 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:16.756 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:16.756 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:16.756 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:16.756 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:17.015 /dev/nbd1 00:13:17.015 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:17.015 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:17.015 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:17.015 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:17.015 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:17.015 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:17.015 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:17.015 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:17.015 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:17.015 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:17.015 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:13:17.015 1+0 records in 00:13:17.015 1+0 records out 00:13:17.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527721 s, 7.8 MB/s 00:13:17.015 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.015 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:17.015 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.015 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:17.015 12:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:17.015 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.015 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:17.015 12:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:17.285 12:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:17.285 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.285 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:17.285 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:17.286 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:17.286 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.286 12:30:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:17.286 12:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:17.286 12:30:29 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:17.286 12:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:17.286 12:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.286 12:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.286 12:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:17.286 12:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:17.286 12:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.286 12:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.286 12:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75187 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 
75187 ']' 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 75187 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75187 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:17.547 killing process with pid 75187 00:13:17.547 Received shutdown signal, test time was about 60.000000 seconds 00:13:17.547 00:13:17.547 Latency(us) 00:13:17.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.547 =================================================================================================================== 00:13:17.547 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75187' 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 75187 00:13:17.547 [2024-09-30 12:30:29.414437] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:17.547 12:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 75187 00:13:18.116 [2024-09-30 12:30:29.731628] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:19.497 00:13:19.497 real 0m15.611s 00:13:19.497 user 0m17.313s 00:13:19.497 sys 0m3.179s 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:19.497 ************************************ 00:13:19.497 END TEST raid_rebuild_test 00:13:19.497 ************************************ 00:13:19.497 12:30:31 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:13:19.497 12:30:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:19.497 12:30:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.497 12:30:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:19.497 ************************************ 00:13:19.497 START TEST raid_rebuild_test_sb 00:13:19.497 ************************************ 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:19.497 12:30:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75605 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75605 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75605 ']' 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.497 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:19.497 12:30:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.497 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:19.497 Zero copy mechanism will not be used. 00:13:19.497 [2024-09-30 12:30:31.229273] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:19.497 [2024-09-30 12:30:31.229386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75605 ] 00:13:19.757 [2024-09-30 12:30:31.394076] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.757 [2024-09-30 12:30:31.635947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.017 [2024-09-30 12:30:31.867181] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.017 [2024-09-30 12:30:31.867289] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.276 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:20.276 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:20.276 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:20.276 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 
-b BaseBdev1_malloc 00:13:20.276 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.276 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.276 BaseBdev1_malloc 00:13:20.276 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.276 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:20.276 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.276 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.276 [2024-09-30 12:30:32.100729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:20.276 [2024-09-30 12:30:32.100819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.276 [2024-09-30 12:30:32.100848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:20.276 [2024-09-30 12:30:32.100864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.276 [2024-09-30 12:30:32.103270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.276 [2024-09-30 12:30:32.103422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:20.276 BaseBdev1 00:13:20.276 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.276 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:20.276 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:20.276 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.276 12:30:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:20.536 BaseBdev2_malloc 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.536 [2024-09-30 12:30:32.190730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:20.536 [2024-09-30 12:30:32.190809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.536 [2024-09-30 12:30:32.190832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:20.536 [2024-09-30 12:30:32.190844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.536 [2024-09-30 12:30:32.193257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.536 [2024-09-30 12:30:32.193299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:20.536 BaseBdev2 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.536 spare_malloc 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 
-t 0 -w 100000 -n 100000 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.536 spare_delay 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.536 [2024-09-30 12:30:32.259811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:20.536 [2024-09-30 12:30:32.259951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.536 [2024-09-30 12:30:32.259975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:20.536 [2024-09-30 12:30:32.259987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.536 [2024-09-30 12:30:32.262374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.536 [2024-09-30 12:30:32.262415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:20.536 spare 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.536 [2024-09-30 12:30:32.271854] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:20.536 [2024-09-30 12:30:32.273890] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:20.536 [2024-09-30 12:30:32.274060] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:20.536 [2024-09-30 12:30:32.274080] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:20.536 [2024-09-30 12:30:32.274353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:20.536 [2024-09-30 12:30:32.274538] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:20.536 [2024-09-30 12:30:32.274548] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:20.536 [2024-09-30 12:30:32.274699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.536 
12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.536 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.537 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.537 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.537 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.537 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.537 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.537 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.537 "name": "raid_bdev1", 00:13:20.537 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:20.537 "strip_size_kb": 0, 00:13:20.537 "state": "online", 00:13:20.537 "raid_level": "raid1", 00:13:20.537 "superblock": true, 00:13:20.537 "num_base_bdevs": 2, 00:13:20.537 "num_base_bdevs_discovered": 2, 00:13:20.537 "num_base_bdevs_operational": 2, 00:13:20.537 "base_bdevs_list": [ 00:13:20.537 { 00:13:20.537 "name": "BaseBdev1", 00:13:20.537 "uuid": "201ed2e1-e404-5b7c-ae93-4966caad0298", 00:13:20.537 "is_configured": true, 00:13:20.537 "data_offset": 2048, 00:13:20.537 "data_size": 63488 00:13:20.537 }, 00:13:20.537 { 00:13:20.537 "name": "BaseBdev2", 00:13:20.537 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:20.537 "is_configured": true, 00:13:20.537 "data_offset": 2048, 00:13:20.537 "data_size": 63488 00:13:20.537 } 00:13:20.537 ] 00:13:20.537 }' 00:13:20.537 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.537 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:21.103 [2024-09-30 12:30:32.739367] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 
00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:21.103 12:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:21.362 [2024-09-30 12:30:32.998716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:21.362 /dev/nbd0 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:21.362 12:30:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.362 1+0 records in 00:13:21.362 1+0 records out 00:13:21.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375481 s, 10.9 MB/s 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:21.362 12:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:25.559 63488+0 records in 00:13:25.559 63488+0 records out 00:13:25.559 32505856 bytes (33 MB, 31 MiB) copied, 3.67637 s, 8.8 MB/s 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:25.559 [2024-09-30 12:30:36.940034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.559 [2024-09-30 12:30:36.956114] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.559 12:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.559 12:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.559 "name": "raid_bdev1", 00:13:25.559 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:25.559 "strip_size_kb": 0, 00:13:25.559 "state": "online", 00:13:25.559 "raid_level": "raid1", 00:13:25.559 "superblock": true, 00:13:25.559 "num_base_bdevs": 2, 00:13:25.559 "num_base_bdevs_discovered": 1, 00:13:25.559 "num_base_bdevs_operational": 1, 00:13:25.559 
"base_bdevs_list": [ 00:13:25.559 { 00:13:25.559 "name": null, 00:13:25.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.559 "is_configured": false, 00:13:25.559 "data_offset": 0, 00:13:25.559 "data_size": 63488 00:13:25.559 }, 00:13:25.559 { 00:13:25.559 "name": "BaseBdev2", 00:13:25.559 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:25.559 "is_configured": true, 00:13:25.559 "data_offset": 2048, 00:13:25.559 "data_size": 63488 00:13:25.559 } 00:13:25.559 ] 00:13:25.559 }' 00:13:25.559 12:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.559 12:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.559 12:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:25.559 12:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.559 12:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.559 [2024-09-30 12:30:37.387468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:25.559 [2024-09-30 12:30:37.404871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:25.559 12:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.559 12:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:25.559 [2024-09-30 12:30:37.406976] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:26.939 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.939 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.939 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.939 12:30:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.939 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.939 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.939 12:30:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.939 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.939 12:30:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.939 12:30:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.939 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.939 "name": "raid_bdev1", 00:13:26.939 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:26.939 "strip_size_kb": 0, 00:13:26.939 "state": "online", 00:13:26.939 "raid_level": "raid1", 00:13:26.939 "superblock": true, 00:13:26.939 "num_base_bdevs": 2, 00:13:26.939 "num_base_bdevs_discovered": 2, 00:13:26.939 "num_base_bdevs_operational": 2, 00:13:26.939 "process": { 00:13:26.939 "type": "rebuild", 00:13:26.939 "target": "spare", 00:13:26.939 "progress": { 00:13:26.939 "blocks": 20480, 00:13:26.940 "percent": 32 00:13:26.940 } 00:13:26.940 }, 00:13:26.940 "base_bdevs_list": [ 00:13:26.940 { 00:13:26.940 "name": "spare", 00:13:26.940 "uuid": "fa3b6099-f0bd-5312-9884-06e02ad2546c", 00:13:26.940 "is_configured": true, 00:13:26.940 "data_offset": 2048, 00:13:26.940 "data_size": 63488 00:13:26.940 }, 00:13:26.940 { 00:13:26.940 "name": "BaseBdev2", 00:13:26.940 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:26.940 "is_configured": true, 00:13:26.940 "data_offset": 2048, 00:13:26.940 "data_size": 63488 00:13:26.940 } 00:13:26.940 ] 00:13:26.940 }' 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.940 [2024-09-30 12:30:38.542160] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:26.940 [2024-09-30 12:30:38.615742] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:26.940 [2024-09-30 12:30:38.615867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.940 [2024-09-30 12:30:38.615904] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:26.940 [2024-09-30 12:30:38.615929] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.940 "name": "raid_bdev1", 00:13:26.940 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:26.940 "strip_size_kb": 0, 00:13:26.940 "state": "online", 00:13:26.940 "raid_level": "raid1", 00:13:26.940 "superblock": true, 00:13:26.940 "num_base_bdevs": 2, 00:13:26.940 "num_base_bdevs_discovered": 1, 00:13:26.940 "num_base_bdevs_operational": 1, 00:13:26.940 "base_bdevs_list": [ 00:13:26.940 { 00:13:26.940 "name": null, 00:13:26.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.940 "is_configured": false, 00:13:26.940 "data_offset": 0, 00:13:26.940 "data_size": 63488 00:13:26.940 }, 00:13:26.940 { 00:13:26.940 "name": "BaseBdev2", 00:13:26.940 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:26.940 "is_configured": true, 00:13:26.940 "data_offset": 2048, 00:13:26.940 "data_size": 
63488 00:13:26.940 } 00:13:26.940 ] 00:13:26.940 }' 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.940 12:30:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.213 12:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:27.213 12:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.213 12:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:27.213 12:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:27.213 12:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.478 12:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.478 12:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.478 12:30:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.478 12:30:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.478 12:30:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.478 12:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.478 "name": "raid_bdev1", 00:13:27.478 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:27.478 "strip_size_kb": 0, 00:13:27.478 "state": "online", 00:13:27.478 "raid_level": "raid1", 00:13:27.478 "superblock": true, 00:13:27.478 "num_base_bdevs": 2, 00:13:27.478 "num_base_bdevs_discovered": 1, 00:13:27.478 "num_base_bdevs_operational": 1, 00:13:27.478 "base_bdevs_list": [ 00:13:27.478 { 00:13:27.478 "name": null, 00:13:27.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.478 "is_configured": false, 00:13:27.478 
"data_offset": 0, 00:13:27.478 "data_size": 63488 00:13:27.478 }, 00:13:27.478 { 00:13:27.478 "name": "BaseBdev2", 00:13:27.478 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:27.478 "is_configured": true, 00:13:27.478 "data_offset": 2048, 00:13:27.478 "data_size": 63488 00:13:27.478 } 00:13:27.478 ] 00:13:27.478 }' 00:13:27.478 12:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.478 12:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:27.478 12:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.478 12:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:27.478 12:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:27.478 12:30:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.478 12:30:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.478 [2024-09-30 12:30:39.252962] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:27.478 [2024-09-30 12:30:39.268844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:27.478 12:30:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.478 12:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:27.478 [2024-09-30 12:30:39.270933] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:28.488 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.488 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.488 12:30:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.488 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.488 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.488 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.488 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.488 12:30:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.488 12:30:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.488 12:30:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.488 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.488 "name": "raid_bdev1", 00:13:28.488 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:28.488 "strip_size_kb": 0, 00:13:28.488 "state": "online", 00:13:28.488 "raid_level": "raid1", 00:13:28.488 "superblock": true, 00:13:28.488 "num_base_bdevs": 2, 00:13:28.488 "num_base_bdevs_discovered": 2, 00:13:28.488 "num_base_bdevs_operational": 2, 00:13:28.488 "process": { 00:13:28.488 "type": "rebuild", 00:13:28.488 "target": "spare", 00:13:28.488 "progress": { 00:13:28.488 "blocks": 20480, 00:13:28.488 "percent": 32 00:13:28.488 } 00:13:28.488 }, 00:13:28.488 "base_bdevs_list": [ 00:13:28.488 { 00:13:28.488 "name": "spare", 00:13:28.488 "uuid": "fa3b6099-f0bd-5312-9884-06e02ad2546c", 00:13:28.488 "is_configured": true, 00:13:28.488 "data_offset": 2048, 00:13:28.488 "data_size": 63488 00:13:28.488 }, 00:13:28.488 { 00:13:28.488 "name": "BaseBdev2", 00:13:28.488 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:28.488 "is_configured": true, 00:13:28.488 "data_offset": 2048, 00:13:28.488 "data_size": 63488 00:13:28.488 } 00:13:28.488 ] 00:13:28.488 }' 00:13:28.488 
12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.488 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.748 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.748 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.748 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:28.748 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:28.748 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:28.748 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:28.748 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:28.748 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:28.748 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=385 00:13:28.748 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:28.748 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.748 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.748 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.748 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.748 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.749 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.749 12:30:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.749 12:30:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.749 12:30:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.749 12:30:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.749 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.749 "name": "raid_bdev1", 00:13:28.749 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:28.749 "strip_size_kb": 0, 00:13:28.749 "state": "online", 00:13:28.749 "raid_level": "raid1", 00:13:28.749 "superblock": true, 00:13:28.749 "num_base_bdevs": 2, 00:13:28.749 "num_base_bdevs_discovered": 2, 00:13:28.749 "num_base_bdevs_operational": 2, 00:13:28.749 "process": { 00:13:28.749 "type": "rebuild", 00:13:28.749 "target": "spare", 00:13:28.749 "progress": { 00:13:28.749 "blocks": 22528, 00:13:28.749 "percent": 35 00:13:28.749 } 00:13:28.749 }, 00:13:28.749 "base_bdevs_list": [ 00:13:28.749 { 00:13:28.749 "name": "spare", 00:13:28.749 "uuid": "fa3b6099-f0bd-5312-9884-06e02ad2546c", 00:13:28.749 "is_configured": true, 00:13:28.749 "data_offset": 2048, 00:13:28.749 "data_size": 63488 00:13:28.749 }, 00:13:28.749 { 00:13:28.749 "name": "BaseBdev2", 00:13:28.749 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:28.749 "is_configured": true, 00:13:28.749 "data_offset": 2048, 00:13:28.749 "data_size": 63488 00:13:28.749 } 00:13:28.749 ] 00:13:28.749 }' 00:13:28.749 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.749 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.749 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.749 12:30:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.749 12:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:29.687 12:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:29.687 12:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.687 12:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.687 12:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.687 12:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.687 12:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.687 12:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.687 12:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.687 12:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.687 12:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.687 12:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.947 12:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.947 "name": "raid_bdev1", 00:13:29.947 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:29.947 "strip_size_kb": 0, 00:13:29.947 "state": "online", 00:13:29.947 "raid_level": "raid1", 00:13:29.947 "superblock": true, 00:13:29.947 "num_base_bdevs": 2, 00:13:29.947 "num_base_bdevs_discovered": 2, 00:13:29.947 "num_base_bdevs_operational": 2, 00:13:29.947 "process": { 00:13:29.947 "type": "rebuild", 00:13:29.947 "target": "spare", 00:13:29.947 "progress": { 00:13:29.947 "blocks": 45056, 00:13:29.947 "percent": 70 
00:13:29.947 } 00:13:29.947 }, 00:13:29.947 "base_bdevs_list": [ 00:13:29.947 { 00:13:29.947 "name": "spare", 00:13:29.947 "uuid": "fa3b6099-f0bd-5312-9884-06e02ad2546c", 00:13:29.947 "is_configured": true, 00:13:29.947 "data_offset": 2048, 00:13:29.947 "data_size": 63488 00:13:29.947 }, 00:13:29.947 { 00:13:29.947 "name": "BaseBdev2", 00:13:29.947 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:29.947 "is_configured": true, 00:13:29.947 "data_offset": 2048, 00:13:29.947 "data_size": 63488 00:13:29.947 } 00:13:29.947 ] 00:13:29.947 }' 00:13:29.947 12:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.947 12:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:29.947 12:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.947 12:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.947 12:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:30.516 [2024-09-30 12:30:42.393162] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:30.516 [2024-09-30 12:30:42.393254] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:30.516 [2024-09-30 12:30:42.393363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.085 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:31.085 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.085 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.085 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.085 12:30:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.085 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.085 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.085 12:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.085 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.085 12:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.085 12:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.085 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.085 "name": "raid_bdev1", 00:13:31.085 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:31.085 "strip_size_kb": 0, 00:13:31.085 "state": "online", 00:13:31.085 "raid_level": "raid1", 00:13:31.085 "superblock": true, 00:13:31.085 "num_base_bdevs": 2, 00:13:31.085 "num_base_bdevs_discovered": 2, 00:13:31.085 "num_base_bdevs_operational": 2, 00:13:31.086 "base_bdevs_list": [ 00:13:31.086 { 00:13:31.086 "name": "spare", 00:13:31.086 "uuid": "fa3b6099-f0bd-5312-9884-06e02ad2546c", 00:13:31.086 "is_configured": true, 00:13:31.086 "data_offset": 2048, 00:13:31.086 "data_size": 63488 00:13:31.086 }, 00:13:31.086 { 00:13:31.086 "name": "BaseBdev2", 00:13:31.086 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:31.086 "is_configured": true, 00:13:31.086 "data_offset": 2048, 00:13:31.086 "data_size": 63488 00:13:31.086 } 00:13:31.086 ] 00:13:31.086 }' 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.086 "name": "raid_bdev1", 00:13:31.086 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:31.086 "strip_size_kb": 0, 00:13:31.086 "state": "online", 00:13:31.086 "raid_level": "raid1", 00:13:31.086 "superblock": true, 00:13:31.086 "num_base_bdevs": 2, 00:13:31.086 "num_base_bdevs_discovered": 2, 00:13:31.086 "num_base_bdevs_operational": 2, 00:13:31.086 "base_bdevs_list": [ 00:13:31.086 { 00:13:31.086 "name": "spare", 00:13:31.086 "uuid": "fa3b6099-f0bd-5312-9884-06e02ad2546c", 00:13:31.086 "is_configured": true, 00:13:31.086 "data_offset": 2048, 
00:13:31.086 "data_size": 63488 00:13:31.086 }, 00:13:31.086 { 00:13:31.086 "name": "BaseBdev2", 00:13:31.086 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:31.086 "is_configured": true, 00:13:31.086 "data_offset": 2048, 00:13:31.086 "data_size": 63488 00:13:31.086 } 00:13:31.086 ] 00:13:31.086 }' 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.086 12:30:42 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.086 12:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.345 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.345 "name": "raid_bdev1", 00:13:31.345 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:31.345 "strip_size_kb": 0, 00:13:31.345 "state": "online", 00:13:31.345 "raid_level": "raid1", 00:13:31.345 "superblock": true, 00:13:31.345 "num_base_bdevs": 2, 00:13:31.345 "num_base_bdevs_discovered": 2, 00:13:31.345 "num_base_bdevs_operational": 2, 00:13:31.345 "base_bdevs_list": [ 00:13:31.345 { 00:13:31.345 "name": "spare", 00:13:31.345 "uuid": "fa3b6099-f0bd-5312-9884-06e02ad2546c", 00:13:31.345 "is_configured": true, 00:13:31.345 "data_offset": 2048, 00:13:31.345 "data_size": 63488 00:13:31.345 }, 00:13:31.345 { 00:13:31.345 "name": "BaseBdev2", 00:13:31.345 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:31.345 "is_configured": true, 00:13:31.345 "data_offset": 2048, 00:13:31.345 "data_size": 63488 00:13:31.345 } 00:13:31.345 ] 00:13:31.345 }' 00:13:31.345 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.345 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.605 [2024-09-30 12:30:43.401997] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: raid_bdev1 00:13:31.605 [2024-09-30 12:30:43.402092] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.605 [2024-09-30 12:30:43.402226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.605 [2024-09-30 12:30:43.402341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.605 [2024-09-30 12:30:43.402384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:31.605 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:31.865 /dev/nbd0 00:13:31.865 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:31.865 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:31.865 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:31.865 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:31.865 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:31.865 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:31.865 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:31.865 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:31.865 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:31.865 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:31.865 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.865 1+0 records in 00:13:31.865 1+0 records out 
00:13:31.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178176 s, 23.0 MB/s 00:13:31.865 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.865 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:31.865 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.865 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:31.865 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:31.865 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.865 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:31.865 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:32.125 /dev/nbd1 00:13:32.125 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:32.125 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:32.125 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:32.125 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:32.125 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:32.125 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:32.125 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:32.125 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:32.125 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- 
# (( i = 1 )) 00:13:32.125 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:32.125 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.125 1+0 records in 00:13:32.125 1+0 records out 00:13:32.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461118 s, 8.9 MB/s 00:13:32.125 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.125 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:32.125 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.125 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:32.125 12:30:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:32.125 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.125 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:32.125 12:30:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:32.384 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:32.384 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.384 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:32.384 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:32.384 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:32.384 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for 
i in "${nbd_list[@]}" 00:13:32.384 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:32.643 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:32.643 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:32.643 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:32.643 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.643 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.643 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:32.643 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:32.643 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.643 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.643 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:32.903 
12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.903 [2024-09-30 12:30:44.579224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:32.903 [2024-09-30 12:30:44.579283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.903 [2024-09-30 12:30:44.579309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:32.903 [2024-09-30 12:30:44.579328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.903 [2024-09-30 12:30:44.581870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.903 [2024-09-30 12:30:44.581905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:32.903 [2024-09-30 12:30:44.582002] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:32.903 [2024-09-30 12:30:44.582060] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:32.903 [2024-09-30 12:30:44.582209] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:32.903 spare 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.903 [2024-09-30 12:30:44.682108] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:32.903 [2024-09-30 12:30:44.682139] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:32.903 [2024-09-30 12:30:44.682406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:32.903 [2024-09-30 12:30:44.682572] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:32.903 [2024-09-30 12:30:44.682582] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:32.903 [2024-09-30 12:30:44.682767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.903 12:30:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.903 "name": "raid_bdev1", 00:13:32.903 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:32.903 "strip_size_kb": 0, 00:13:32.903 "state": "online", 00:13:32.903 "raid_level": "raid1", 00:13:32.903 "superblock": true, 00:13:32.903 "num_base_bdevs": 2, 00:13:32.903 "num_base_bdevs_discovered": 2, 00:13:32.903 "num_base_bdevs_operational": 2, 00:13:32.903 "base_bdevs_list": [ 00:13:32.903 { 00:13:32.903 "name": "spare", 00:13:32.903 "uuid": "fa3b6099-f0bd-5312-9884-06e02ad2546c", 00:13:32.903 "is_configured": true, 00:13:32.903 "data_offset": 2048, 00:13:32.903 "data_size": 63488 00:13:32.903 }, 00:13:32.903 { 00:13:32.903 "name": "BaseBdev2", 00:13:32.903 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:32.903 "is_configured": true, 00:13:32.903 "data_offset": 2048, 00:13:32.903 "data_size": 63488 00:13:32.903 
} 00:13:32.903 ] 00:13:32.903 }' 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.903 12:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.471 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:33.471 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.471 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:33.471 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:33.471 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.471 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.471 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.471 12:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.471 12:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.471 12:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.472 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.472 "name": "raid_bdev1", 00:13:33.472 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:33.472 "strip_size_kb": 0, 00:13:33.472 "state": "online", 00:13:33.472 "raid_level": "raid1", 00:13:33.472 "superblock": true, 00:13:33.472 "num_base_bdevs": 2, 00:13:33.472 "num_base_bdevs_discovered": 2, 00:13:33.472 "num_base_bdevs_operational": 2, 00:13:33.472 "base_bdevs_list": [ 00:13:33.472 { 00:13:33.472 "name": "spare", 00:13:33.472 "uuid": "fa3b6099-f0bd-5312-9884-06e02ad2546c", 00:13:33.472 "is_configured": true, 00:13:33.472 "data_offset": 2048, 
00:13:33.472 "data_size": 63488 00:13:33.472 }, 00:13:33.472 { 00:13:33.472 "name": "BaseBdev2", 00:13:33.472 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:33.472 "is_configured": true, 00:13:33.472 "data_offset": 2048, 00:13:33.472 "data_size": 63488 00:13:33.472 } 00:13:33.472 ] 00:13:33.472 }' 00:13:33.472 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.472 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:33.472 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.472 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:33.472 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.472 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:33.472 12:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.472 12:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.472 12:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.731 [2024-09-30 12:30:45.385896] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.731 "name": "raid_bdev1", 00:13:33.731 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:33.731 "strip_size_kb": 0, 00:13:33.731 "state": "online", 00:13:33.731 "raid_level": "raid1", 00:13:33.731 "superblock": true, 00:13:33.731 "num_base_bdevs": 2, 00:13:33.731 
"num_base_bdevs_discovered": 1, 00:13:33.731 "num_base_bdevs_operational": 1, 00:13:33.731 "base_bdevs_list": [ 00:13:33.731 { 00:13:33.731 "name": null, 00:13:33.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.731 "is_configured": false, 00:13:33.731 "data_offset": 0, 00:13:33.731 "data_size": 63488 00:13:33.731 }, 00:13:33.731 { 00:13:33.731 "name": "BaseBdev2", 00:13:33.731 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:33.731 "is_configured": true, 00:13:33.731 "data_offset": 2048, 00:13:33.731 "data_size": 63488 00:13:33.731 } 00:13:33.731 ] 00:13:33.731 }' 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.731 12:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.300 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:34.300 12:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.300 12:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.300 [2024-09-30 12:30:45.897021] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:34.300 [2024-09-30 12:30:45.897213] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:34.300 [2024-09-30 12:30:45.897230] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:34.300 [2024-09-30 12:30:45.897265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:34.300 [2024-09-30 12:30:45.912749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:34.300 12:30:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.300 12:30:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:34.300 [2024-09-30 12:30:45.914930] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:35.238 12:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.238 12:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.238 12:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.238 12:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.238 12:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.238 12:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.238 12:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.238 12:30:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.238 12:30:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.238 12:30:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.238 12:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.238 "name": "raid_bdev1", 00:13:35.238 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:35.238 "strip_size_kb": 0, 00:13:35.238 "state": "online", 00:13:35.238 "raid_level": "raid1", 
00:13:35.238 "superblock": true, 00:13:35.238 "num_base_bdevs": 2, 00:13:35.238 "num_base_bdevs_discovered": 2, 00:13:35.238 "num_base_bdevs_operational": 2, 00:13:35.238 "process": { 00:13:35.238 "type": "rebuild", 00:13:35.238 "target": "spare", 00:13:35.238 "progress": { 00:13:35.238 "blocks": 20480, 00:13:35.238 "percent": 32 00:13:35.238 } 00:13:35.238 }, 00:13:35.238 "base_bdevs_list": [ 00:13:35.238 { 00:13:35.238 "name": "spare", 00:13:35.238 "uuid": "fa3b6099-f0bd-5312-9884-06e02ad2546c", 00:13:35.238 "is_configured": true, 00:13:35.238 "data_offset": 2048, 00:13:35.238 "data_size": 63488 00:13:35.238 }, 00:13:35.238 { 00:13:35.238 "name": "BaseBdev2", 00:13:35.238 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:35.238 "is_configured": true, 00:13:35.238 "data_offset": 2048, 00:13:35.238 "data_size": 63488 00:13:35.238 } 00:13:35.238 ] 00:13:35.238 }' 00:13:35.238 12:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.238 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.238 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.238 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.238 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:35.238 12:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.238 12:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.238 [2024-09-30 12:30:47.077866] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.238 [2024-09-30 12:30:47.123461] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:35.238 [2024-09-30 12:30:47.123523] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:35.238 [2024-09-30 12:30:47.123538] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.238 [2024-09-30 12:30:47.123548] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:35.498 12:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.498 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:35.498 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.498 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.498 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.498 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.498 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:35.498 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.498 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.498 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.498 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.498 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.498 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.498 12:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.498 12:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.498 12:30:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.498 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.498 "name": "raid_bdev1", 00:13:35.498 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:35.498 "strip_size_kb": 0, 00:13:35.498 "state": "online", 00:13:35.498 "raid_level": "raid1", 00:13:35.498 "superblock": true, 00:13:35.498 "num_base_bdevs": 2, 00:13:35.498 "num_base_bdevs_discovered": 1, 00:13:35.498 "num_base_bdevs_operational": 1, 00:13:35.498 "base_bdevs_list": [ 00:13:35.498 { 00:13:35.498 "name": null, 00:13:35.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.498 "is_configured": false, 00:13:35.498 "data_offset": 0, 00:13:35.498 "data_size": 63488 00:13:35.498 }, 00:13:35.498 { 00:13:35.498 "name": "BaseBdev2", 00:13:35.498 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:35.498 "is_configured": true, 00:13:35.498 "data_offset": 2048, 00:13:35.498 "data_size": 63488 00:13:35.498 } 00:13:35.498 ] 00:13:35.498 }' 00:13:35.498 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.498 12:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.757 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:35.757 12:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.757 12:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.757 [2024-09-30 12:30:47.545069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:35.757 [2024-09-30 12:30:47.545198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.757 [2024-09-30 12:30:47.545242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:35.757 [2024-09-30 12:30:47.545312] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.757 [2024-09-30 12:30:47.545913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.757 [2024-09-30 12:30:47.545987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:35.757 [2024-09-30 12:30:47.546116] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:35.757 [2024-09-30 12:30:47.546160] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:35.757 [2024-09-30 12:30:47.546204] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:35.757 [2024-09-30 12:30:47.546275] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.757 [2024-09-30 12:30:47.561913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:35.757 spare 00:13:35.757 12:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.757 12:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:35.757 [2024-09-30 12:30:47.564081] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:36.695 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.695 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.695 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.695 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.695 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.695 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:36.695 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.695 12:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.695 12:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.954 "name": "raid_bdev1", 00:13:36.954 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:36.954 "strip_size_kb": 0, 00:13:36.954 "state": "online", 00:13:36.954 "raid_level": "raid1", 00:13:36.954 "superblock": true, 00:13:36.954 "num_base_bdevs": 2, 00:13:36.954 "num_base_bdevs_discovered": 2, 00:13:36.954 "num_base_bdevs_operational": 2, 00:13:36.954 "process": { 00:13:36.954 "type": "rebuild", 00:13:36.954 "target": "spare", 00:13:36.954 "progress": { 00:13:36.954 "blocks": 20480, 00:13:36.954 "percent": 32 00:13:36.954 } 00:13:36.954 }, 00:13:36.954 "base_bdevs_list": [ 00:13:36.954 { 00:13:36.954 "name": "spare", 00:13:36.954 "uuid": "fa3b6099-f0bd-5312-9884-06e02ad2546c", 00:13:36.954 "is_configured": true, 00:13:36.954 "data_offset": 2048, 00:13:36.954 "data_size": 63488 00:13:36.954 }, 00:13:36.954 { 00:13:36.954 "name": "BaseBdev2", 00:13:36.954 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:36.954 "is_configured": true, 00:13:36.954 "data_offset": 2048, 00:13:36.954 "data_size": 63488 00:13:36.954 } 00:13:36.954 ] 00:13:36.954 }' 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.954 
12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.954 [2024-09-30 12:30:48.728122] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.954 [2024-09-30 12:30:48.772829] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:36.954 [2024-09-30 12:30:48.772883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.954 [2024-09-30 12:30:48.772900] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.954 [2024-09-30 12:30:48.772907] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.954 "name": "raid_bdev1", 00:13:36.954 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:36.954 "strip_size_kb": 0, 00:13:36.954 "state": "online", 00:13:36.954 "raid_level": "raid1", 00:13:36.954 "superblock": true, 00:13:36.954 "num_base_bdevs": 2, 00:13:36.954 "num_base_bdevs_discovered": 1, 00:13:36.954 "num_base_bdevs_operational": 1, 00:13:36.954 "base_bdevs_list": [ 00:13:36.954 { 00:13:36.954 "name": null, 00:13:36.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.954 "is_configured": false, 00:13:36.954 "data_offset": 0, 00:13:36.954 "data_size": 63488 00:13:36.954 }, 00:13:36.954 { 00:13:36.954 "name": "BaseBdev2", 00:13:36.954 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:36.954 "is_configured": true, 00:13:36.954 "data_offset": 2048, 00:13:36.954 "data_size": 63488 00:13:36.954 } 00:13:36.954 ] 00:13:36.954 }' 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.954 12:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.523 12:30:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.523 "name": "raid_bdev1", 00:13:37.523 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:37.523 "strip_size_kb": 0, 00:13:37.523 "state": "online", 00:13:37.523 "raid_level": "raid1", 00:13:37.523 "superblock": true, 00:13:37.523 "num_base_bdevs": 2, 00:13:37.523 "num_base_bdevs_discovered": 1, 00:13:37.523 "num_base_bdevs_operational": 1, 00:13:37.523 "base_bdevs_list": [ 00:13:37.523 { 00:13:37.523 "name": null, 00:13:37.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.523 "is_configured": false, 00:13:37.523 "data_offset": 0, 00:13:37.523 "data_size": 63488 00:13:37.523 }, 00:13:37.523 { 00:13:37.523 "name": "BaseBdev2", 00:13:37.523 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:37.523 "is_configured": true, 00:13:37.523 "data_offset": 2048, 00:13:37.523 "data_size": 
63488 00:13:37.523 } 00:13:37.523 ] 00:13:37.523 }' 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.523 [2024-09-30 12:30:49.382049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:37.523 [2024-09-30 12:30:49.382110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.523 [2024-09-30 12:30:49.382136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:37.523 [2024-09-30 12:30:49.382146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.523 [2024-09-30 12:30:49.382663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.523 [2024-09-30 12:30:49.382680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:37.523 [2024-09-30 12:30:49.382782] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:37.523 [2024-09-30 12:30:49.382797] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:37.523 [2024-09-30 12:30:49.382807] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:37.523 [2024-09-30 12:30:49.382818] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:37.523 BaseBdev1 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.523 12:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:38.902 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:38.902 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.902 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.902 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.902 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.902 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:38.902 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.902 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.902 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.902 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.902 12:30:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.902 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.902 12:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.902 12:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.902 12:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.902 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.902 "name": "raid_bdev1", 00:13:38.902 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:38.902 "strip_size_kb": 0, 00:13:38.902 "state": "online", 00:13:38.902 "raid_level": "raid1", 00:13:38.902 "superblock": true, 00:13:38.902 "num_base_bdevs": 2, 00:13:38.902 "num_base_bdevs_discovered": 1, 00:13:38.902 "num_base_bdevs_operational": 1, 00:13:38.902 "base_bdevs_list": [ 00:13:38.902 { 00:13:38.902 "name": null, 00:13:38.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.902 "is_configured": false, 00:13:38.902 "data_offset": 0, 00:13:38.902 "data_size": 63488 00:13:38.902 }, 00:13:38.902 { 00:13:38.902 "name": "BaseBdev2", 00:13:38.902 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:38.902 "is_configured": true, 00:13:38.902 "data_offset": 2048, 00:13:38.902 "data_size": 63488 00:13:38.902 } 00:13:38.902 ] 00:13:38.902 }' 00:13:38.902 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.902 12:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.162 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:39.162 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.162 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:39.162 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.162 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.162 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.162 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.162 12:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.162 12:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.162 12:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.162 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.162 "name": "raid_bdev1", 00:13:39.162 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:39.162 "strip_size_kb": 0, 00:13:39.162 "state": "online", 00:13:39.162 "raid_level": "raid1", 00:13:39.162 "superblock": true, 00:13:39.162 "num_base_bdevs": 2, 00:13:39.162 "num_base_bdevs_discovered": 1, 00:13:39.162 "num_base_bdevs_operational": 1, 00:13:39.162 "base_bdevs_list": [ 00:13:39.162 { 00:13:39.162 "name": null, 00:13:39.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.162 "is_configured": false, 00:13:39.162 "data_offset": 0, 00:13:39.162 "data_size": 63488 00:13:39.162 }, 00:13:39.162 { 00:13:39.162 "name": "BaseBdev2", 00:13:39.162 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:39.162 "is_configured": true, 00:13:39.162 "data_offset": 2048, 00:13:39.162 "data_size": 63488 00:13:39.162 } 00:13:39.162 ] 00:13:39.162 }' 00:13:39.162 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.162 12:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.162 12:30:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.162 12:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.162 12:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:39.162 12:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:13:39.162 12:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:39.162 12:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:39.162 12:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:39.162 12:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:39.162 12:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:39.162 12:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:39.162 12:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.162 12:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.162 [2024-09-30 12:30:51.019319] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.162 [2024-09-30 12:30:51.019527] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:39.162 [2024-09-30 12:30:51.019544] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:39.162 request: 00:13:39.162 { 00:13:39.162 "base_bdev": "BaseBdev1", 00:13:39.162 "raid_bdev": "raid_bdev1", 00:13:39.162 "method": 
"bdev_raid_add_base_bdev", 00:13:39.162 "req_id": 1 00:13:39.162 } 00:13:39.162 Got JSON-RPC error response 00:13:39.162 response: 00:13:39.162 { 00:13:39.162 "code": -22, 00:13:39.162 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:39.162 } 00:13:39.162 12:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:39.162 12:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:13:39.162 12:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:39.162 12:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:39.162 12:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:39.162 12:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:40.628 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:40.628 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.628 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.628 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.628 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.628 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:40.628 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.628 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.628 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.628 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.628 12:30:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.628 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.628 12:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.628 12:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.628 12:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.628 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.628 "name": "raid_bdev1", 00:13:40.628 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:40.628 "strip_size_kb": 0, 00:13:40.628 "state": "online", 00:13:40.628 "raid_level": "raid1", 00:13:40.628 "superblock": true, 00:13:40.628 "num_base_bdevs": 2, 00:13:40.628 "num_base_bdevs_discovered": 1, 00:13:40.628 "num_base_bdevs_operational": 1, 00:13:40.628 "base_bdevs_list": [ 00:13:40.628 { 00:13:40.628 "name": null, 00:13:40.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.628 "is_configured": false, 00:13:40.628 "data_offset": 0, 00:13:40.628 "data_size": 63488 00:13:40.628 }, 00:13:40.628 { 00:13:40.628 "name": "BaseBdev2", 00:13:40.628 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:40.628 "is_configured": true, 00:13:40.628 "data_offset": 2048, 00:13:40.628 "data_size": 63488 00:13:40.628 } 00:13:40.628 ] 00:13:40.628 }' 00:13:40.628 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.628 12:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.888 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:40.888 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.888 12:30:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:40.888 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:40.888 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.888 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.888 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.888 12:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.889 12:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.889 12:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.889 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.889 "name": "raid_bdev1", 00:13:40.889 "uuid": "0edf72a7-20b2-443c-b90e-e783b72c3e2b", 00:13:40.889 "strip_size_kb": 0, 00:13:40.889 "state": "online", 00:13:40.889 "raid_level": "raid1", 00:13:40.889 "superblock": true, 00:13:40.889 "num_base_bdevs": 2, 00:13:40.889 "num_base_bdevs_discovered": 1, 00:13:40.889 "num_base_bdevs_operational": 1, 00:13:40.889 "base_bdevs_list": [ 00:13:40.889 { 00:13:40.889 "name": null, 00:13:40.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.889 "is_configured": false, 00:13:40.889 "data_offset": 0, 00:13:40.889 "data_size": 63488 00:13:40.889 }, 00:13:40.889 { 00:13:40.889 "name": "BaseBdev2", 00:13:40.889 "uuid": "f4c3f5f2-a1d1-5a2b-a3d1-01d6a4d8d653", 00:13:40.889 "is_configured": true, 00:13:40.889 "data_offset": 2048, 00:13:40.889 "data_size": 63488 00:13:40.889 } 00:13:40.889 ] 00:13:40.889 }' 00:13:40.889 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.889 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:40.889 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.889 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:40.889 12:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75605 00:13:40.889 12:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75605 ']' 00:13:40.889 12:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 75605 00:13:40.889 12:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:40.889 12:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:40.889 12:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75605 00:13:40.889 12:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:40.889 12:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:40.889 killing process with pid 75605 00:13:40.889 Received shutdown signal, test time was about 60.000000 seconds 00:13:40.889 00:13:40.889 Latency(us) 00:13:40.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.889 =================================================================================================================== 00:13:40.889 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:40.889 12:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75605' 00:13:40.889 12:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 75605 00:13:40.889 [2024-09-30 12:30:52.707677] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:40.889 12:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 75605 
00:13:40.889 [2024-09-30 12:30:52.707862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.889 [2024-09-30 12:30:52.707931] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.889 [2024-09-30 12:30:52.707946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:41.149 [2024-09-30 12:30:53.024488] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:42.529 12:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:42.529 00:13:42.529 real 0m23.220s 00:13:42.529 user 0m28.257s 00:13:42.529 sys 0m3.883s 00:13:42.529 12:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.529 12:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.529 ************************************ 00:13:42.529 END TEST raid_rebuild_test_sb 00:13:42.529 ************************************ 00:13:42.529 12:30:54 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:42.529 12:30:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:42.529 12:30:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:42.529 12:30:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:42.529 ************************************ 00:13:42.529 START TEST raid_rebuild_test_io 00:13:42.529 ************************************ 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76336 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76336 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 76336 ']' 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:42.789 12:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.789 [2024-09-30 12:30:54.525558] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:42.789 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:42.789 Zero copy mechanism will not be used. 
00:13:42.789 [2024-09-30 12:30:54.526229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76336 ] 00:13:42.789 [2024-09-30 12:30:54.674092] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.049 [2024-09-30 12:30:54.915216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.309 [2024-09-30 12:30:55.146486] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.309 [2024-09-30 12:30:55.146587] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.569 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:43.569 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:13:43.569 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:43.569 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:43.569 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.569 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.569 BaseBdev1_malloc 00:13:43.569 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.569 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:43.569 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.569 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.569 [2024-09-30 12:30:55.415933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:43.569 [2024-09-30 12:30:55.416089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.569 [2024-09-30 12:30:55.416149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:43.569 [2024-09-30 12:30:55.416191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.569 [2024-09-30 12:30:55.418559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.569 [2024-09-30 12:30:55.418632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:43.569 BaseBdev1 00:13:43.569 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.569 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:43.569 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:43.569 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.569 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.829 BaseBdev2_malloc 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.829 [2024-09-30 12:30:55.505220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:43.829 [2024-09-30 12:30:55.505355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.829 [2024-09-30 12:30:55.505392] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:43.829 [2024-09-30 12:30:55.505429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.829 [2024-09-30 12:30:55.507859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.829 [2024-09-30 12:30:55.507936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:43.829 BaseBdev2 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.829 spare_malloc 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.829 spare_delay 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.829 [2024-09-30 12:30:55.578076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:13:43.829 [2024-09-30 12:30:55.578154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.829 [2024-09-30 12:30:55.578173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:43.829 [2024-09-30 12:30:55.578184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.829 [2024-09-30 12:30:55.580608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.829 [2024-09-30 12:30:55.580686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:43.829 spare 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.829 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.829 [2024-09-30 12:30:55.590100] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:43.829 [2024-09-30 12:30:55.592195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:43.829 [2024-09-30 12:30:55.592341] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:43.829 [2024-09-30 12:30:55.592374] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:43.830 [2024-09-30 12:30:55.592660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:43.830 [2024-09-30 12:30:55.592863] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:43.830 [2024-09-30 12:30:55.592876] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:13:43.830 [2024-09-30 12:30:55.593030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.830 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.830 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:43.830 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.830 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.830 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.830 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.830 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.830 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.830 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.830 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.830 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.830 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.830 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.830 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.830 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.830 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.830 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.830 
"name": "raid_bdev1", 00:13:43.830 "uuid": "024978f7-69d7-45ef-9176-ba57ecad0de6", 00:13:43.830 "strip_size_kb": 0, 00:13:43.830 "state": "online", 00:13:43.830 "raid_level": "raid1", 00:13:43.830 "superblock": false, 00:13:43.830 "num_base_bdevs": 2, 00:13:43.830 "num_base_bdevs_discovered": 2, 00:13:43.830 "num_base_bdevs_operational": 2, 00:13:43.830 "base_bdevs_list": [ 00:13:43.830 { 00:13:43.830 "name": "BaseBdev1", 00:13:43.830 "uuid": "7288256e-c76c-5f86-b7be-df1141ba8456", 00:13:43.830 "is_configured": true, 00:13:43.830 "data_offset": 0, 00:13:43.830 "data_size": 65536 00:13:43.830 }, 00:13:43.830 { 00:13:43.830 "name": "BaseBdev2", 00:13:43.830 "uuid": "3ffa8c6a-e94c-5e30-ab85-7434df0f7c2b", 00:13:43.830 "is_configured": true, 00:13:43.830 "data_offset": 0, 00:13:43.830 "data_size": 65536 00:13:43.830 } 00:13:43.830 ] 00:13:43.830 }' 00:13:43.830 12:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.830 12:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.399 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:44.399 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:44.399 12:30:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.399 12:30:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.399 [2024-09-30 12:30:56.069510] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.399 12:30:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.399 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:44.399 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:44.399 12:30:56 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.399 12:30:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.399 12:30:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.399 12:30:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.399 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:44.399 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:44.399 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:44.399 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:44.399 12:30:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.399 12:30:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.399 [2024-09-30 12:30:56.165070] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:44.399 12:30:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.399 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:44.400 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.400 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.400 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.400 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.400 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:44.400 12:30:56 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.400 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.400 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.400 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.400 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.400 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.400 12:30:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.400 12:30:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.400 12:30:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.400 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.400 "name": "raid_bdev1", 00:13:44.400 "uuid": "024978f7-69d7-45ef-9176-ba57ecad0de6", 00:13:44.400 "strip_size_kb": 0, 00:13:44.400 "state": "online", 00:13:44.400 "raid_level": "raid1", 00:13:44.400 "superblock": false, 00:13:44.400 "num_base_bdevs": 2, 00:13:44.400 "num_base_bdevs_discovered": 1, 00:13:44.400 "num_base_bdevs_operational": 1, 00:13:44.400 "base_bdevs_list": [ 00:13:44.400 { 00:13:44.400 "name": null, 00:13:44.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.400 "is_configured": false, 00:13:44.400 "data_offset": 0, 00:13:44.400 "data_size": 65536 00:13:44.400 }, 00:13:44.400 { 00:13:44.400 "name": "BaseBdev2", 00:13:44.400 "uuid": "3ffa8c6a-e94c-5e30-ab85-7434df0f7c2b", 00:13:44.400 "is_configured": true, 00:13:44.400 "data_offset": 0, 00:13:44.400 "data_size": 65536 00:13:44.400 } 00:13:44.400 ] 00:13:44.400 }' 00:13:44.400 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:44.400 12:30:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.400 [2024-09-30 12:30:56.238126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:44.400 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:44.400 Zero copy mechanism will not be used. 00:13:44.400 Running I/O for 60 seconds... 00:13:44.969 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:44.969 12:30:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.969 12:30:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.969 [2024-09-30 12:30:56.618517] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.969 12:30:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.969 12:30:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:44.969 [2024-09-30 12:30:56.681658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:44.969 [2024-09-30 12:30:56.683951] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:44.969 [2024-09-30 12:30:56.792199] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:44.969 [2024-09-30 12:30:56.792941] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:45.229 [2024-09-30 12:30:57.013183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:45.229 [2024-09-30 12:30:57.013603] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:45.488 170.00 IOPS, 510.00 MiB/s 
[2024-09-30 12:30:57.340135] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:45.488 [2024-09-30 12:30:57.340653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:45.748 [2024-09-30 12:30:57.556270] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:45.748 [2024-09-30 12:30:57.556678] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:46.007 12:30:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.007 12:30:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.007 12:30:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.007 12:30:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.007 12:30:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.007 12:30:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.007 12:30:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.007 12:30:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.007 12:30:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.007 12:30:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.007 12:30:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.007 "name": "raid_bdev1", 00:13:46.007 "uuid": "024978f7-69d7-45ef-9176-ba57ecad0de6", 00:13:46.007 "strip_size_kb": 0, 00:13:46.007 
"state": "online", 00:13:46.007 "raid_level": "raid1", 00:13:46.007 "superblock": false, 00:13:46.007 "num_base_bdevs": 2, 00:13:46.007 "num_base_bdevs_discovered": 2, 00:13:46.007 "num_base_bdevs_operational": 2, 00:13:46.007 "process": { 00:13:46.007 "type": "rebuild", 00:13:46.007 "target": "spare", 00:13:46.007 "progress": { 00:13:46.007 "blocks": 10240, 00:13:46.007 "percent": 15 00:13:46.007 } 00:13:46.007 }, 00:13:46.007 "base_bdevs_list": [ 00:13:46.007 { 00:13:46.007 "name": "spare", 00:13:46.007 "uuid": "3c819beb-031f-52fb-8475-7b1e5fafaa4c", 00:13:46.007 "is_configured": true, 00:13:46.007 "data_offset": 0, 00:13:46.007 "data_size": 65536 00:13:46.007 }, 00:13:46.007 { 00:13:46.007 "name": "BaseBdev2", 00:13:46.007 "uuid": "3ffa8c6a-e94c-5e30-ab85-7434df0f7c2b", 00:13:46.007 "is_configured": true, 00:13:46.007 "data_offset": 0, 00:13:46.007 "data_size": 65536 00:13:46.007 } 00:13:46.007 ] 00:13:46.007 }' 00:13:46.007 12:30:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.007 12:30:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.007 12:30:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.007 12:30:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.007 12:30:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:46.007 12:30:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.007 12:30:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.007 [2024-09-30 12:30:57.824586] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:46.267 [2024-09-30 12:30:57.988973] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:46.267 [2024-09-30 
12:30:57.991955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.267 [2024-09-30 12:30:57.992063] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:46.267 [2024-09-30 12:30:57.992096] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:46.267 [2024-09-30 12:30:58.032703] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:46.267 12:30:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.267 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:46.267 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.267 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.267 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.267 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.267 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:46.267 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.267 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.267 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.267 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.267 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.267 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.267 12:30:58 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.267 12:30:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.267 12:30:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.267 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.267 "name": "raid_bdev1", 00:13:46.267 "uuid": "024978f7-69d7-45ef-9176-ba57ecad0de6", 00:13:46.267 "strip_size_kb": 0, 00:13:46.267 "state": "online", 00:13:46.267 "raid_level": "raid1", 00:13:46.267 "superblock": false, 00:13:46.267 "num_base_bdevs": 2, 00:13:46.267 "num_base_bdevs_discovered": 1, 00:13:46.267 "num_base_bdevs_operational": 1, 00:13:46.267 "base_bdevs_list": [ 00:13:46.267 { 00:13:46.267 "name": null, 00:13:46.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.267 "is_configured": false, 00:13:46.267 "data_offset": 0, 00:13:46.267 "data_size": 65536 00:13:46.267 }, 00:13:46.267 { 00:13:46.267 "name": "BaseBdev2", 00:13:46.267 "uuid": "3ffa8c6a-e94c-5e30-ab85-7434df0f7c2b", 00:13:46.267 "is_configured": true, 00:13:46.267 "data_offset": 0, 00:13:46.267 "data_size": 65536 00:13:46.267 } 00:13:46.267 ] 00:13:46.267 }' 00:13:46.267 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.267 12:30:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.787 157.00 IOPS, 471.00 MiB/s 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:46.787 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.787 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:46.787 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:46.787 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:46.787 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.787 12:30:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.787 12:30:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.787 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.787 12:30:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.787 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.787 "name": "raid_bdev1", 00:13:46.787 "uuid": "024978f7-69d7-45ef-9176-ba57ecad0de6", 00:13:46.787 "strip_size_kb": 0, 00:13:46.787 "state": "online", 00:13:46.787 "raid_level": "raid1", 00:13:46.787 "superblock": false, 00:13:46.787 "num_base_bdevs": 2, 00:13:46.787 "num_base_bdevs_discovered": 1, 00:13:46.787 "num_base_bdevs_operational": 1, 00:13:46.787 "base_bdevs_list": [ 00:13:46.787 { 00:13:46.787 "name": null, 00:13:46.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.787 "is_configured": false, 00:13:46.787 "data_offset": 0, 00:13:46.787 "data_size": 65536 00:13:46.787 }, 00:13:46.787 { 00:13:46.787 "name": "BaseBdev2", 00:13:46.787 "uuid": "3ffa8c6a-e94c-5e30-ab85-7434df0f7c2b", 00:13:46.787 "is_configured": true, 00:13:46.787 "data_offset": 0, 00:13:46.787 "data_size": 65536 00:13:46.787 } 00:13:46.787 ] 00:13:46.787 }' 00:13:46.787 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.787 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:46.787 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.787 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:46.787 12:30:58 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:46.787 12:30:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.787 12:30:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.787 [2024-09-30 12:30:58.611219] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:46.787 12:30:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.787 12:30:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:46.787 [2024-09-30 12:30:58.667538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:46.787 [2024-09-30 12:30:58.669886] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:47.047 [2024-09-30 12:30:58.784238] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:47.047 [2024-09-30 12:30:58.785047] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:47.306 [2024-09-30 12:30:59.017465] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:47.306 [2024-09-30 12:30:59.017990] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:47.565 173.00 IOPS, 519.00 MiB/s [2024-09-30 12:30:59.395064] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:47.565 [2024-09-30 12:30:59.395494] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:47.825 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:13:47.825 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.825 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.825 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.825 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.825 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.825 12:30:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.825 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.825 12:30:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.825 12:30:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.826 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.826 "name": "raid_bdev1", 00:13:47.826 "uuid": "024978f7-69d7-45ef-9176-ba57ecad0de6", 00:13:47.826 "strip_size_kb": 0, 00:13:47.826 "state": "online", 00:13:47.826 "raid_level": "raid1", 00:13:47.826 "superblock": false, 00:13:47.826 "num_base_bdevs": 2, 00:13:47.826 "num_base_bdevs_discovered": 2, 00:13:47.826 "num_base_bdevs_operational": 2, 00:13:47.826 "process": { 00:13:47.826 "type": "rebuild", 00:13:47.826 "target": "spare", 00:13:47.826 "progress": { 00:13:47.826 "blocks": 12288, 00:13:47.826 "percent": 18 00:13:47.826 } 00:13:47.826 }, 00:13:47.826 "base_bdevs_list": [ 00:13:47.826 { 00:13:47.826 "name": "spare", 00:13:47.826 "uuid": "3c819beb-031f-52fb-8475-7b1e5fafaa4c", 00:13:47.826 "is_configured": true, 00:13:47.826 "data_offset": 0, 00:13:47.826 "data_size": 65536 00:13:47.826 }, 00:13:47.826 { 00:13:47.826 "name": "BaseBdev2", 00:13:47.826 "uuid": 
"3ffa8c6a-e94c-5e30-ab85-7434df0f7c2b", 00:13:47.826 "is_configured": true, 00:13:47.826 "data_offset": 0, 00:13:47.826 "data_size": 65536 00:13:47.826 } 00:13:47.826 ] 00:13:47.826 }' 00:13:47.826 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.086 [2024-09-30 12:30:59.725799] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=404 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.086 "name": "raid_bdev1", 00:13:48.086 "uuid": "024978f7-69d7-45ef-9176-ba57ecad0de6", 00:13:48.086 "strip_size_kb": 0, 00:13:48.086 "state": "online", 00:13:48.086 "raid_level": "raid1", 00:13:48.086 "superblock": false, 00:13:48.086 "num_base_bdevs": 2, 00:13:48.086 "num_base_bdevs_discovered": 2, 00:13:48.086 "num_base_bdevs_operational": 2, 00:13:48.086 "process": { 00:13:48.086 "type": "rebuild", 00:13:48.086 "target": "spare", 00:13:48.086 "progress": { 00:13:48.086 "blocks": 14336, 00:13:48.086 "percent": 21 00:13:48.086 } 00:13:48.086 }, 00:13:48.086 "base_bdevs_list": [ 00:13:48.086 { 00:13:48.086 "name": "spare", 00:13:48.086 "uuid": "3c819beb-031f-52fb-8475-7b1e5fafaa4c", 00:13:48.086 "is_configured": true, 00:13:48.086 "data_offset": 0, 00:13:48.086 "data_size": 65536 00:13:48.086 }, 00:13:48.086 { 00:13:48.086 "name": "BaseBdev2", 00:13:48.086 "uuid": "3ffa8c6a-e94c-5e30-ab85-7434df0f7c2b", 00:13:48.086 "is_configured": true, 00:13:48.086 "data_offset": 0, 00:13:48.086 "data_size": 65536 00:13:48.086 } 00:13:48.086 ] 00:13:48.086 }' 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.086 12:30:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.086 [2024-09-30 12:30:59.930194] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.086 12:30:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:48.607 156.50 IOPS, 469.50 MiB/s [2024-09-30 12:31:00.278611] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:48.867 [2024-09-30 12:31:00.519990] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:48.867 [2024-09-30 12:31:00.520527] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:49.127 [2024-09-30 12:31:00.873432] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:49.127 12:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:49.127 12:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.127 12:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.127 12:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.127 12:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.127 12:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.127 12:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.127 12:31:00 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.127 12:31:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.127 12:31:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.127 12:31:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.387 12:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.387 "name": "raid_bdev1", 00:13:49.387 "uuid": "024978f7-69d7-45ef-9176-ba57ecad0de6", 00:13:49.387 "strip_size_kb": 0, 00:13:49.387 "state": "online", 00:13:49.387 "raid_level": "raid1", 00:13:49.387 "superblock": false, 00:13:49.387 "num_base_bdevs": 2, 00:13:49.387 "num_base_bdevs_discovered": 2, 00:13:49.387 "num_base_bdevs_operational": 2, 00:13:49.387 "process": { 00:13:49.387 "type": "rebuild", 00:13:49.387 "target": "spare", 00:13:49.387 "progress": { 00:13:49.387 "blocks": 26624, 00:13:49.387 "percent": 40 00:13:49.387 } 00:13:49.387 }, 00:13:49.387 "base_bdevs_list": [ 00:13:49.387 { 00:13:49.387 "name": "spare", 00:13:49.387 "uuid": "3c819beb-031f-52fb-8475-7b1e5fafaa4c", 00:13:49.387 "is_configured": true, 00:13:49.387 "data_offset": 0, 00:13:49.387 "data_size": 65536 00:13:49.387 }, 00:13:49.387 { 00:13:49.387 "name": "BaseBdev2", 00:13:49.387 "uuid": "3ffa8c6a-e94c-5e30-ab85-7434df0f7c2b", 00:13:49.387 "is_configured": true, 00:13:49.387 "data_offset": 0, 00:13:49.387 "data_size": 65536 00:13:49.387 } 00:13:49.387 ] 00:13:49.387 }' 00:13:49.387 12:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.387 12:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.387 12:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.387 12:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:13:49.387 12:31:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:49.387 [2024-09-30 12:31:01.223887] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:50.326 138.40 IOPS, 415.20 MiB/s [2024-09-30 12:31:02.051559] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:50.326 [2024-09-30 12:31:02.051973] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:50.326 12:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:50.326 12:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.326 12:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.326 12:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.326 12:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.326 12:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.326 12:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.326 12:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.326 12:31:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.326 12:31:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.326 12:31:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.326 12:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.326 "name": "raid_bdev1", 00:13:50.326 "uuid": 
"024978f7-69d7-45ef-9176-ba57ecad0de6", 00:13:50.326 "strip_size_kb": 0, 00:13:50.326 "state": "online", 00:13:50.326 "raid_level": "raid1", 00:13:50.326 "superblock": false, 00:13:50.326 "num_base_bdevs": 2, 00:13:50.326 "num_base_bdevs_discovered": 2, 00:13:50.326 "num_base_bdevs_operational": 2, 00:13:50.326 "process": { 00:13:50.326 "type": "rebuild", 00:13:50.326 "target": "spare", 00:13:50.326 "progress": { 00:13:50.326 "blocks": 47104, 00:13:50.326 "percent": 71 00:13:50.326 } 00:13:50.326 }, 00:13:50.326 "base_bdevs_list": [ 00:13:50.326 { 00:13:50.326 "name": "spare", 00:13:50.326 "uuid": "3c819beb-031f-52fb-8475-7b1e5fafaa4c", 00:13:50.326 "is_configured": true, 00:13:50.326 "data_offset": 0, 00:13:50.326 "data_size": 65536 00:13:50.326 }, 00:13:50.326 { 00:13:50.326 "name": "BaseBdev2", 00:13:50.326 "uuid": "3ffa8c6a-e94c-5e30-ab85-7434df0f7c2b", 00:13:50.326 "is_configured": true, 00:13:50.326 "data_offset": 0, 00:13:50.326 "data_size": 65536 00:13:50.326 } 00:13:50.326 ] 00:13:50.326 }' 00:13:50.326 12:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.326 12:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.326 12:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.585 125.67 IOPS, 377.00 MiB/s 12:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.585 12:31:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:51.155 [2024-09-30 12:31:02.802291] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:51.415 [2024-09-30 12:31:03.131991] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:51.415 [2024-09-30 12:31:03.231806] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild 
on raid bdev raid_bdev1 00:13:51.415 [2024-09-30 12:31:03.234557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.415 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:51.415 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.415 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.415 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.415 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.415 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.415 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.415 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.415 12:31:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.415 12:31:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.415 113.71 IOPS, 341.14 MiB/s 12:31:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.415 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.415 "name": "raid_bdev1", 00:13:51.415 "uuid": "024978f7-69d7-45ef-9176-ba57ecad0de6", 00:13:51.415 "strip_size_kb": 0, 00:13:51.415 "state": "online", 00:13:51.415 "raid_level": "raid1", 00:13:51.415 "superblock": false, 00:13:51.415 "num_base_bdevs": 2, 00:13:51.415 "num_base_bdevs_discovered": 2, 00:13:51.415 "num_base_bdevs_operational": 2, 00:13:51.415 "base_bdevs_list": [ 00:13:51.415 { 00:13:51.415 "name": "spare", 00:13:51.415 "uuid": "3c819beb-031f-52fb-8475-7b1e5fafaa4c", 00:13:51.415 
"is_configured": true, 00:13:51.415 "data_offset": 0, 00:13:51.415 "data_size": 65536 00:13:51.415 }, 00:13:51.415 { 00:13:51.415 "name": "BaseBdev2", 00:13:51.415 "uuid": "3ffa8c6a-e94c-5e30-ab85-7434df0f7c2b", 00:13:51.415 "is_configured": true, 00:13:51.415 "data_offset": 0, 00:13:51.415 "data_size": 65536 00:13:51.415 } 00:13:51.415 ] 00:13:51.415 }' 00:13:51.415 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.675 "name": "raid_bdev1", 00:13:51.675 "uuid": "024978f7-69d7-45ef-9176-ba57ecad0de6", 00:13:51.675 "strip_size_kb": 0, 00:13:51.675 "state": "online", 00:13:51.675 "raid_level": "raid1", 00:13:51.675 "superblock": false, 00:13:51.675 "num_base_bdevs": 2, 00:13:51.675 "num_base_bdevs_discovered": 2, 00:13:51.675 "num_base_bdevs_operational": 2, 00:13:51.675 "base_bdevs_list": [ 00:13:51.675 { 00:13:51.675 "name": "spare", 00:13:51.675 "uuid": "3c819beb-031f-52fb-8475-7b1e5fafaa4c", 00:13:51.675 "is_configured": true, 00:13:51.675 "data_offset": 0, 00:13:51.675 "data_size": 65536 00:13:51.675 }, 00:13:51.675 { 00:13:51.675 "name": "BaseBdev2", 00:13:51.675 "uuid": "3ffa8c6a-e94c-5e30-ab85-7434df0f7c2b", 00:13:51.675 "is_configured": true, 00:13:51.675 "data_offset": 0, 00:13:51.675 "data_size": 65536 00:13:51.675 } 00:13:51.675 ] 00:13:51.675 }' 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.675 
12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.675 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.676 12:31:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.676 12:31:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.676 12:31:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.676 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.676 "name": "raid_bdev1", 00:13:51.676 "uuid": "024978f7-69d7-45ef-9176-ba57ecad0de6", 00:13:51.676 "strip_size_kb": 0, 00:13:51.676 "state": "online", 00:13:51.676 "raid_level": "raid1", 00:13:51.676 "superblock": false, 00:13:51.676 "num_base_bdevs": 2, 00:13:51.676 "num_base_bdevs_discovered": 2, 00:13:51.676 "num_base_bdevs_operational": 2, 00:13:51.676 "base_bdevs_list": [ 00:13:51.676 { 00:13:51.676 "name": "spare", 00:13:51.676 "uuid": "3c819beb-031f-52fb-8475-7b1e5fafaa4c", 00:13:51.676 "is_configured": true, 00:13:51.676 "data_offset": 0, 00:13:51.676 "data_size": 65536 00:13:51.676 }, 00:13:51.676 { 00:13:51.676 "name": "BaseBdev2", 00:13:51.676 "uuid": "3ffa8c6a-e94c-5e30-ab85-7434df0f7c2b", 00:13:51.676 "is_configured": true, 00:13:51.676 "data_offset": 0, 00:13:51.676 "data_size": 65536 
00:13:51.676 } 00:13:51.676 ] 00:13:51.676 }' 00:13:51.676 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.676 12:31:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.245 12:31:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:52.245 12:31:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.245 12:31:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.245 [2024-09-30 12:31:03.911626] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:52.245 [2024-09-30 12:31:03.911664] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:52.245 00:13:52.245 Latency(us) 00:13:52.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.245 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:52.245 raid_bdev1 : 7.75 106.14 318.42 0.00 0.00 12878.99 318.38 115389.15 00:13:52.245 =================================================================================================================== 00:13:52.245 Total : 106.14 318.42 0.00 0.00 12878.99 318.38 115389.15 00:13:52.245 { 00:13:52.245 "results": [ 00:13:52.245 { 00:13:52.245 "job": "raid_bdev1", 00:13:52.245 "core_mask": "0x1", 00:13:52.245 "workload": "randrw", 00:13:52.245 "percentage": 50, 00:13:52.245 "status": "finished", 00:13:52.245 "queue_depth": 2, 00:13:52.245 "io_size": 3145728, 00:13:52.245 "runtime": 7.753994, 00:13:52.245 "iops": 106.13884921757742, 00:13:52.245 "mibps": 318.4165476527322, 00:13:52.245 "io_failed": 0, 00:13:52.245 "io_timeout": 0, 00:13:52.245 "avg_latency_us": 12878.985685557684, 00:13:52.245 "min_latency_us": 318.37903930131006, 00:13:52.245 "max_latency_us": 115389.14934497817 00:13:52.245 } 00:13:52.245 ], 00:13:52.245 "core_count": 
1 00:13:52.245 } 00:13:52.245 [2024-09-30 12:31:04.000047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.245 [2024-09-30 12:31:04.000090] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.245 [2024-09-30 12:31:04.000171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:52.245 [2024-09-30 12:31:04.000182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:52.245 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.245 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.245 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:52.245 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.245 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.245 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.245 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:52.245 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:52.245 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:52.245 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:52.245 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:52.245 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:52.245 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:52.245 12:31:04 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:52.245 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:52.245 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:52.245 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:52.245 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:52.245 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:52.504 /dev/nbd0 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:52.504 1+0 records in 00:13:52.504 1+0 records out 00:13:52.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464205 s, 8.8 MB/s 
00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:52.504 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:52.505 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:52.505 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:52.505 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:52.505 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:52.505 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:52.505 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:52.505 12:31:04 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:52.764 /dev/nbd1 00:13:52.764 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:52.764 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:52.764 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:52.764 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:52.764 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:52.764 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:52.764 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:52.764 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:52.764 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:52.764 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:52.764 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:52.764 1+0 records in 00:13:52.764 1+0 records out 00:13:52.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519345 s, 7.9 MB/s 00:13:52.764 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.764 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:52.764 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.764 12:31:04 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:52.764 12:31:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:52.764 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:52.764 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:52.764 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:53.023 
12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:53.023 12:31:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76336 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' 
-z 76336 ']' 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 76336 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76336 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76336' 00:13:53.283 killing process with pid 76336 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 76336 00:13:53.283 Received shutdown signal, test time was about 8.933054 seconds 00:13:53.283 00:13:53.283 Latency(us) 00:13:53.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.283 =================================================================================================================== 00:13:53.283 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:53.283 [2024-09-30 12:31:05.156184] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.283 12:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 76336 00:13:53.542 [2024-09-30 12:31:05.396579] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.920 12:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:54.920 00:13:54.920 real 0m12.346s 00:13:54.920 user 0m15.282s 00:13:54.920 sys 0m1.537s 00:13:54.920 12:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:54.920 12:31:06 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:54.920 ************************************ 00:13:54.920 END TEST raid_rebuild_test_io 00:13:54.920 ************************************ 00:13:55.180 12:31:06 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:55.180 12:31:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:55.180 12:31:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:55.180 12:31:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:55.180 ************************************ 00:13:55.180 START TEST raid_rebuild_test_sb_io 00:13:55.180 ************************************ 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev2 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76714 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76714 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 76714 ']' 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:55.180 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:13:55.180 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.181 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:55.181 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.181 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:55.181 12:31:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.181 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:55.181 Zero copy mechanism will not be used. 00:13:55.181 [2024-09-30 12:31:06.968938] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:55.181 [2024-09-30 12:31:06.969104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76714 ] 00:13:55.440 [2024-09-30 12:31:07.159673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.700 [2024-09-30 12:31:07.404469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.960 [2024-09-30 12:31:07.637412] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.960 [2024-09-30 12:31:07.637453] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.960 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:55.960 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:13:55.960 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 
00:13:55.960 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:55.960 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.960 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.960 BaseBdev1_malloc 00:13:55.960 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.960 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:55.960 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.960 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.960 [2024-09-30 12:31:07.840591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:55.960 [2024-09-30 12:31:07.840686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.960 [2024-09-30 12:31:07.840713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:55.960 [2024-09-30 12:31:07.840729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.960 [2024-09-30 12:31:07.843118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.960 [2024-09-30 12:31:07.843227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:55.960 BaseBdev1 00:13:55.960 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.960 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:55.960 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:55.960 12:31:07 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.960 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.220 BaseBdev2_malloc 00:13:56.220 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.220 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:56.220 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.220 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.221 [2024-09-30 12:31:07.907720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:56.221 [2024-09-30 12:31:07.907874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.221 [2024-09-30 12:31:07.907901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:56.221 [2024-09-30 12:31:07.907913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.221 [2024-09-30 12:31:07.910250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.221 [2024-09-30 12:31:07.910285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:56.221 BaseBdev2 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.221 spare_malloc 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.221 spare_delay 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.221 [2024-09-30 12:31:07.981883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:56.221 [2024-09-30 12:31:07.982012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.221 [2024-09-30 12:31:07.982035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:56.221 [2024-09-30 12:31:07.982047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.221 [2024-09-30 12:31:07.984440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.221 [2024-09-30 12:31:07.984481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:56.221 spare 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.221 [2024-09-30 12:31:07.993920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.221 [2024-09-30 12:31:07.995940] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.221 [2024-09-30 12:31:07.996164] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:56.221 [2024-09-30 12:31:07.996192] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:56.221 [2024-09-30 12:31:07.996452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:56.221 [2024-09-30 12:31:07.996619] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:56.221 [2024-09-30 12:31:07.996628] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:56.221 [2024-09-30 12:31:07.996800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.221 12:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.221 
12:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.221 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.221 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.221 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.221 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.221 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.221 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.221 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.221 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.221 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.221 "name": "raid_bdev1", 00:13:56.221 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:13:56.221 "strip_size_kb": 0, 00:13:56.221 "state": "online", 00:13:56.221 "raid_level": "raid1", 00:13:56.221 "superblock": true, 00:13:56.221 "num_base_bdevs": 2, 00:13:56.221 "num_base_bdevs_discovered": 2, 00:13:56.221 "num_base_bdevs_operational": 2, 00:13:56.221 "base_bdevs_list": [ 00:13:56.221 { 00:13:56.221 "name": "BaseBdev1", 00:13:56.221 "uuid": "f0d6b03e-ce6b-548c-bec4-95e45a95c1a9", 00:13:56.221 "is_configured": true, 00:13:56.221 "data_offset": 2048, 00:13:56.221 "data_size": 63488 00:13:56.221 }, 00:13:56.221 { 00:13:56.221 "name": "BaseBdev2", 00:13:56.221 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:13:56.221 "is_configured": true, 00:13:56.221 "data_offset": 2048, 00:13:56.221 "data_size": 63488 00:13:56.221 } 00:13:56.221 ] 00:13:56.221 }' 00:13:56.221 12:31:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.221 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:56.791 [2024-09-30 12:31:08.429425] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.791 12:31:08 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:56.791 [2024-09-30 12:31:08.505009] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.791 12:31:08 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.791 "name": "raid_bdev1", 00:13:56.791 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:13:56.791 "strip_size_kb": 0, 00:13:56.791 "state": "online", 00:13:56.791 "raid_level": "raid1", 00:13:56.791 "superblock": true, 00:13:56.791 "num_base_bdevs": 2, 00:13:56.791 "num_base_bdevs_discovered": 1, 00:13:56.791 "num_base_bdevs_operational": 1, 00:13:56.791 "base_bdevs_list": [ 00:13:56.791 { 00:13:56.791 "name": null, 00:13:56.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.791 "is_configured": false, 00:13:56.791 "data_offset": 0, 00:13:56.791 "data_size": 63488 00:13:56.791 }, 00:13:56.791 { 00:13:56.791 "name": "BaseBdev2", 00:13:56.791 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:13:56.791 "is_configured": true, 00:13:56.791 "data_offset": 2048, 00:13:56.791 "data_size": 63488 00:13:56.791 } 00:13:56.791 ] 00:13:56.791 }' 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.791 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.791 [2024-09-30 12:31:08.585730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:56.791 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:56.791 Zero copy mechanism will not be used. 00:13:56.791 Running I/O for 60 seconds... 
00:13:57.051 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:57.051 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.051 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.051 [2024-09-30 12:31:08.927012] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:57.318 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.318 12:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:57.318 [2024-09-30 12:31:08.973000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:57.318 [2024-09-30 12:31:08.975113] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:57.318 [2024-09-30 12:31:09.087863] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:57.318 [2024-09-30 12:31:09.088589] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:57.579 [2024-09-30 12:31:09.321115] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:57.579 [2024-09-30 12:31:09.321409] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:57.839 222.00 IOPS, 666.00 MiB/s [2024-09-30 12:31:09.677941] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:58.098 [2024-09-30 12:31:09.928927] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:58.098 12:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.098 12:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.099 12:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.099 12:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.099 12:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.099 12:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.099 12:31:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.099 12:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.099 12:31:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.099 12:31:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.358 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.358 "name": "raid_bdev1", 00:13:58.358 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:13:58.358 "strip_size_kb": 0, 00:13:58.358 "state": "online", 00:13:58.358 "raid_level": "raid1", 00:13:58.358 "superblock": true, 00:13:58.358 "num_base_bdevs": 2, 00:13:58.358 "num_base_bdevs_discovered": 2, 00:13:58.358 "num_base_bdevs_operational": 2, 00:13:58.358 "process": { 00:13:58.358 "type": "rebuild", 00:13:58.358 "target": "spare", 00:13:58.358 "progress": { 00:13:58.358 "blocks": 14336, 00:13:58.358 "percent": 22 00:13:58.358 } 00:13:58.358 }, 00:13:58.358 "base_bdevs_list": [ 00:13:58.358 { 00:13:58.358 "name": "spare", 00:13:58.358 "uuid": "b21535e1-5fec-5a85-9a86-f59a3f8f177d", 00:13:58.358 "is_configured": true, 00:13:58.358 "data_offset": 2048, 00:13:58.358 "data_size": 63488 00:13:58.358 }, 00:13:58.358 { 
00:13:58.358 "name": "BaseBdev2", 00:13:58.358 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:13:58.358 "is_configured": true, 00:13:58.358 "data_offset": 2048, 00:13:58.358 "data_size": 63488 00:13:58.358 } 00:13:58.358 ] 00:13:58.358 }' 00:13:58.358 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.358 [2024-09-30 12:31:10.056381] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:58.358 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.358 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.358 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.358 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:58.358 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.358 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.358 [2024-09-30 12:31:10.113270] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.617 [2024-09-30 12:31:10.273275] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:58.617 [2024-09-30 12:31:10.275662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.617 [2024-09-30 12:31:10.275771] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.617 [2024-09-30 12:31:10.275788] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:58.617 [2024-09-30 12:31:10.305373] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:58.617 12:31:10 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.617 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:58.617 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.617 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.617 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.617 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.617 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:58.617 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.617 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.617 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.617 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.617 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.617 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.617 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.617 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.617 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.617 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.617 "name": "raid_bdev1", 00:13:58.617 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:13:58.617 
"strip_size_kb": 0, 00:13:58.617 "state": "online", 00:13:58.617 "raid_level": "raid1", 00:13:58.617 "superblock": true, 00:13:58.617 "num_base_bdevs": 2, 00:13:58.617 "num_base_bdevs_discovered": 1, 00:13:58.617 "num_base_bdevs_operational": 1, 00:13:58.617 "base_bdevs_list": [ 00:13:58.617 { 00:13:58.617 "name": null, 00:13:58.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.617 "is_configured": false, 00:13:58.617 "data_offset": 0, 00:13:58.617 "data_size": 63488 00:13:58.617 }, 00:13:58.617 { 00:13:58.617 "name": "BaseBdev2", 00:13:58.617 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:13:58.617 "is_configured": true, 00:13:58.617 "data_offset": 2048, 00:13:58.617 "data_size": 63488 00:13:58.617 } 00:13:58.617 ] 00:13:58.617 }' 00:13:58.617 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.617 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.876 184.00 IOPS, 552.00 MiB/s 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.876 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.876 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.876 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.876 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.876 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.876 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.876 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.876 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:58.876 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.876 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.876 "name": "raid_bdev1", 00:13:58.876 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:13:58.876 "strip_size_kb": 0, 00:13:58.876 "state": "online", 00:13:58.876 "raid_level": "raid1", 00:13:58.876 "superblock": true, 00:13:58.876 "num_base_bdevs": 2, 00:13:58.876 "num_base_bdevs_discovered": 1, 00:13:58.876 "num_base_bdevs_operational": 1, 00:13:58.876 "base_bdevs_list": [ 00:13:58.876 { 00:13:58.876 "name": null, 00:13:58.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.876 "is_configured": false, 00:13:58.876 "data_offset": 0, 00:13:58.876 "data_size": 63488 00:13:58.876 }, 00:13:58.876 { 00:13:58.876 "name": "BaseBdev2", 00:13:58.876 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:13:58.876 "is_configured": true, 00:13:58.876 "data_offset": 2048, 00:13:58.876 "data_size": 63488 00:13:58.876 } 00:13:58.876 ] 00:13:58.876 }' 00:13:58.876 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.136 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:59.136 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.136 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:59.136 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:59.136 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.136 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.136 [2024-09-30 12:31:10.814997] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
spare is claimed 00:13:59.136 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.136 12:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:59.136 [2024-09-30 12:31:10.864275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:59.136 [2024-09-30 12:31:10.866493] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:59.136 [2024-09-30 12:31:10.978758] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:59.395 [2024-09-30 12:31:11.191994] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:59.395 [2024-09-30 12:31:11.192374] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:59.963 [2024-09-30 12:31:11.552149] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:59.963 175.67 IOPS, 527.00 MiB/s [2024-09-30 12:31:11.772898] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:59.963 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.963 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.963 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.963 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.963 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.229 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.230 "name": "raid_bdev1", 00:14:00.230 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:14:00.230 "strip_size_kb": 0, 00:14:00.230 "state": "online", 00:14:00.230 "raid_level": "raid1", 00:14:00.230 "superblock": true, 00:14:00.230 "num_base_bdevs": 2, 00:14:00.230 "num_base_bdevs_discovered": 2, 00:14:00.230 "num_base_bdevs_operational": 2, 00:14:00.230 "process": { 00:14:00.230 "type": "rebuild", 00:14:00.230 "target": "spare", 00:14:00.230 "progress": { 00:14:00.230 "blocks": 10240, 00:14:00.230 "percent": 16 00:14:00.230 } 00:14:00.230 }, 00:14:00.230 "base_bdevs_list": [ 00:14:00.230 { 00:14:00.230 "name": "spare", 00:14:00.230 "uuid": "b21535e1-5fec-5a85-9a86-f59a3f8f177d", 00:14:00.230 "is_configured": true, 00:14:00.230 "data_offset": 2048, 00:14:00.230 "data_size": 63488 00:14:00.230 }, 00:14:00.230 { 00:14:00.230 "name": "BaseBdev2", 00:14:00.230 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:14:00.230 "is_configured": true, 00:14:00.230 "data_offset": 2048, 00:14:00.230 "data_size": 63488 00:14:00.230 } 00:14:00.230 ] 00:14:00.230 }' 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:00.230 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=416 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:00.230 12:31:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.230 12:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.230 "name": "raid_bdev1", 00:14:00.230 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:14:00.230 "strip_size_kb": 0, 00:14:00.230 "state": "online", 00:14:00.230 "raid_level": "raid1", 00:14:00.230 "superblock": true, 00:14:00.230 "num_base_bdevs": 2, 00:14:00.230 "num_base_bdevs_discovered": 2, 00:14:00.230 "num_base_bdevs_operational": 2, 00:14:00.230 "process": { 00:14:00.230 "type": "rebuild", 00:14:00.230 "target": "spare", 00:14:00.230 "progress": { 00:14:00.230 "blocks": 12288, 00:14:00.230 "percent": 19 00:14:00.230 } 00:14:00.230 }, 00:14:00.230 "base_bdevs_list": [ 00:14:00.230 { 00:14:00.230 "name": "spare", 00:14:00.230 "uuid": "b21535e1-5fec-5a85-9a86-f59a3f8f177d", 00:14:00.230 "is_configured": true, 00:14:00.230 "data_offset": 2048, 00:14:00.230 "data_size": 63488 00:14:00.230 }, 00:14:00.230 { 00:14:00.230 "name": "BaseBdev2", 00:14:00.230 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:14:00.230 "is_configured": true, 00:14:00.230 "data_offset": 2048, 00:14:00.230 "data_size": 63488 00:14:00.230 } 00:14:00.230 ] 00:14:00.230 }' 00:14:00.230 12:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.230 12:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.230 12:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.501 12:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.501 12:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:00.501 [2024-09-30 12:31:12.146424] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:00.501 [2024-09-30 12:31:12.380659] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:01.028 152.00 IOPS, 456.00 MiB/s [2024-09-30 12:31:12.734349] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:01.287 [2024-09-30 12:31:12.945405] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:01.287 [2024-09-30 12:31:12.945987] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:01.287 12:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:01.287 12:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.287 12:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.287 12:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.287 12:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.287 12:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.287 12:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.287 12:31:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.287 12:31:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.287 12:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.287 12:31:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:01.546 12:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.546 "name": "raid_bdev1", 00:14:01.546 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:14:01.546 "strip_size_kb": 0, 00:14:01.546 "state": "online", 00:14:01.546 "raid_level": "raid1", 00:14:01.546 "superblock": true, 00:14:01.546 "num_base_bdevs": 2, 00:14:01.546 "num_base_bdevs_discovered": 2, 00:14:01.546 "num_base_bdevs_operational": 2, 00:14:01.546 "process": { 00:14:01.546 "type": "rebuild", 00:14:01.546 "target": "spare", 00:14:01.546 "progress": { 00:14:01.546 "blocks": 28672, 00:14:01.546 "percent": 45 00:14:01.546 } 00:14:01.546 }, 00:14:01.546 "base_bdevs_list": [ 00:14:01.546 { 00:14:01.546 "name": "spare", 00:14:01.546 "uuid": "b21535e1-5fec-5a85-9a86-f59a3f8f177d", 00:14:01.546 "is_configured": true, 00:14:01.546 "data_offset": 2048, 00:14:01.546 "data_size": 63488 00:14:01.546 }, 00:14:01.546 { 00:14:01.546 "name": "BaseBdev2", 00:14:01.546 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:14:01.546 "is_configured": true, 00:14:01.546 "data_offset": 2048, 00:14:01.546 "data_size": 63488 00:14:01.546 } 00:14:01.546 ] 00:14:01.546 }' 00:14:01.546 12:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.546 12:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.546 12:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.546 12:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.546 12:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:01.804 129.80 IOPS, 389.40 MiB/s [2024-09-30 12:31:13.623694] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:02.062 [2024-09-30 12:31:13.732407] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:02.063 [2024-09-30 12:31:13.956978] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:02.320 [2024-09-30 12:31:14.161154] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:02.320 [2024-09-30 12:31:14.161635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:02.579 12:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:02.579 12:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.579 12:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.579 12:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.579 12:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.579 12:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.579 12:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.579 12:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.579 12:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.579 12:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.579 12:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.579 12:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.579 "name": "raid_bdev1", 
00:14:02.579 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:14:02.579 "strip_size_kb": 0, 00:14:02.579 "state": "online", 00:14:02.579 "raid_level": "raid1", 00:14:02.579 "superblock": true, 00:14:02.579 "num_base_bdevs": 2, 00:14:02.579 "num_base_bdevs_discovered": 2, 00:14:02.579 "num_base_bdevs_operational": 2, 00:14:02.579 "process": { 00:14:02.579 "type": "rebuild", 00:14:02.579 "target": "spare", 00:14:02.579 "progress": { 00:14:02.579 "blocks": 47104, 00:14:02.579 "percent": 74 00:14:02.579 } 00:14:02.579 }, 00:14:02.579 "base_bdevs_list": [ 00:14:02.579 { 00:14:02.579 "name": "spare", 00:14:02.579 "uuid": "b21535e1-5fec-5a85-9a86-f59a3f8f177d", 00:14:02.579 "is_configured": true, 00:14:02.579 "data_offset": 2048, 00:14:02.579 "data_size": 63488 00:14:02.579 }, 00:14:02.579 { 00:14:02.579 "name": "BaseBdev2", 00:14:02.579 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:14:02.579 "is_configured": true, 00:14:02.579 "data_offset": 2048, 00:14:02.579 "data_size": 63488 00:14:02.579 } 00:14:02.579 ] 00:14:02.579 }' 00:14:02.579 12:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.579 12:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.579 12:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.579 12:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.579 12:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:03.405 115.67 IOPS, 347.00 MiB/s [2024-09-30 12:31:15.124718] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:03.405 [2024-09-30 12:31:15.224576] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:03.405 [2024-09-30 12:31:15.227533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:03.664 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:03.664 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.664 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.664 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.664 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.664 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.664 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.664 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.664 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.664 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.664 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.664 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.664 "name": "raid_bdev1", 00:14:03.664 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:14:03.664 "strip_size_kb": 0, 00:14:03.664 "state": "online", 00:14:03.664 "raid_level": "raid1", 00:14:03.664 "superblock": true, 00:14:03.664 "num_base_bdevs": 2, 00:14:03.664 "num_base_bdevs_discovered": 2, 00:14:03.664 "num_base_bdevs_operational": 2, 00:14:03.664 "base_bdevs_list": [ 00:14:03.664 { 00:14:03.664 "name": "spare", 00:14:03.664 "uuid": "b21535e1-5fec-5a85-9a86-f59a3f8f177d", 00:14:03.664 "is_configured": true, 00:14:03.664 "data_offset": 2048, 00:14:03.664 "data_size": 63488 00:14:03.664 }, 
00:14:03.664 { 00:14:03.664 "name": "BaseBdev2", 00:14:03.664 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:14:03.664 "is_configured": true, 00:14:03.664 "data_offset": 2048, 00:14:03.664 "data_size": 63488 00:14:03.664 } 00:14:03.664 ] 00:14:03.664 }' 00:14:03.664 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.665 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:03.665 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.665 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:03.665 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:03.665 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:03.665 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.665 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:03.665 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:03.665 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.924 104.29 IOPS, 312.86 MiB/s 12:31:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.924 "name": "raid_bdev1", 00:14:03.924 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:14:03.924 "strip_size_kb": 0, 00:14:03.924 "state": "online", 00:14:03.924 "raid_level": "raid1", 00:14:03.924 "superblock": true, 00:14:03.924 "num_base_bdevs": 2, 00:14:03.924 "num_base_bdevs_discovered": 2, 00:14:03.924 "num_base_bdevs_operational": 2, 00:14:03.924 "base_bdevs_list": [ 00:14:03.924 { 00:14:03.924 "name": "spare", 00:14:03.924 "uuid": "b21535e1-5fec-5a85-9a86-f59a3f8f177d", 00:14:03.924 "is_configured": true, 00:14:03.924 "data_offset": 2048, 00:14:03.924 "data_size": 63488 00:14:03.924 }, 00:14:03.924 { 00:14:03.924 "name": "BaseBdev2", 00:14:03.924 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:14:03.924 "is_configured": true, 00:14:03.924 "data_offset": 2048, 00:14:03.924 "data_size": 63488 00:14:03.924 } 00:14:03.924 ] 00:14:03.924 }' 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.924 "name": "raid_bdev1", 00:14:03.924 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:14:03.924 "strip_size_kb": 0, 00:14:03.924 "state": "online", 00:14:03.924 "raid_level": "raid1", 00:14:03.924 "superblock": true, 00:14:03.924 "num_base_bdevs": 2, 00:14:03.924 "num_base_bdevs_discovered": 2, 00:14:03.924 "num_base_bdevs_operational": 2, 00:14:03.924 "base_bdevs_list": [ 00:14:03.924 { 00:14:03.924 "name": "spare", 00:14:03.924 "uuid": "b21535e1-5fec-5a85-9a86-f59a3f8f177d", 00:14:03.924 "is_configured": true, 00:14:03.924 "data_offset": 2048, 00:14:03.924 "data_size": 63488 00:14:03.924 }, 00:14:03.924 { 00:14:03.924 "name": "BaseBdev2", 00:14:03.924 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:14:03.924 "is_configured": true, 00:14:03.924 
"data_offset": 2048, 00:14:03.924 "data_size": 63488 00:14:03.924 } 00:14:03.924 ] 00:14:03.924 }' 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.924 12:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.493 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:04.493 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.493 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.493 [2024-09-30 12:31:16.121186] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:04.493 [2024-09-30 12:31:16.121307] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:04.493 00:14:04.493 Latency(us) 00:14:04.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.493 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:04.493 raid_bdev1 : 7.62 98.38 295.13 0.00 0.00 14103.64 293.34 112641.79 00:14:04.493 =================================================================================================================== 00:14:04.493 Total : 98.38 295.13 0.00 0.00 14103.64 293.34 112641.79 00:14:04.493 [2024-09-30 12:31:16.218145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.493 [2024-09-30 12:31:16.218193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.493 [2024-09-30 12:31:16.218278] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.493 [2024-09-30 12:31:16.218288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:04.493 { 00:14:04.493 "results": [ 00:14:04.493 { 00:14:04.493 "job": 
"raid_bdev1", 00:14:04.493 "core_mask": "0x1", 00:14:04.493 "workload": "randrw", 00:14:04.493 "percentage": 50, 00:14:04.493 "status": "finished", 00:14:04.493 "queue_depth": 2, 00:14:04.493 "io_size": 3145728, 00:14:04.493 "runtime": 7.623684, 00:14:04.493 "iops": 98.37763474981386, 00:14:04.493 "mibps": 295.1329042494416, 00:14:04.493 "io_failed": 0, 00:14:04.493 "io_timeout": 0, 00:14:04.493 "avg_latency_us": 14103.642922852983, 00:14:04.493 "min_latency_us": 293.3379912663755, 00:14:04.493 "max_latency_us": 112641.78864628822 00:14:04.493 } 00:14:04.493 ], 00:14:04.493 "core_count": 1 00:14:04.493 } 00:14:04.493 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.493 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.493 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:04.493 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.493 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.493 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.493 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:04.493 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:04.493 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:04.494 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:04.494 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:04.494 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:04.494 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:14:04.494 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:04.494 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:04.494 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:04.494 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:04.494 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:04.494 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:04.753 /dev/nbd0 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:14:04.753 1+0 records in 00:14:04.753 1+0 records out 00:14:04.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345484 s, 11.9 MB/s 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:04.753 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:04.754 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:04.754 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:04.754 12:31:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:04.754 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:04.754 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:05.013 /dev/nbd1 00:14:05.013 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:05.013 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:05.013 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:05.013 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:05.013 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:05.013 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:05.013 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:05.013 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:05.013 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:05.013 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:05.013 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.013 1+0 records in 00:14:05.013 1+0 records out 00:14:05.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374768 s, 10.9 MB/s 00:14:05.013 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.013 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@886 -- # size=4096 00:14:05.013 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.013 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:05.013 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:05.013 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.013 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:05.014 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:05.273 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:05.273 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:05.273 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:05.273 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:05.273 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:05.273 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:05.273 12:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:05.532 
12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:05.532 12:31:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.532 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.792 [2024-09-30 12:31:17.431636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:05.792 [2024-09-30 12:31:17.431772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.792 [2024-09-30 12:31:17.431834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:05.792 [2024-09-30 12:31:17.431873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.792 [2024-09-30 12:31:17.434414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.792 [2024-09-30 12:31:17.434492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:05.792 [2024-09-30 12:31:17.434612] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:05.792 [2024-09-30 12:31:17.434708] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:05.792 [2024-09-30 12:31:17.434941] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:05.792 spare 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.792 [2024-09-30 12:31:17.534878] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:05.792 [2024-09-30 12:31:17.534947] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:05.792 [2024-09-30 12:31:17.535278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:14:05.792 [2024-09-30 12:31:17.535508] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:05.792 [2024-09-30 12:31:17.535550] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:05.792 [2024-09-30 12:31:17.535776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.792 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.792 "name": "raid_bdev1", 00:14:05.792 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:14:05.792 "strip_size_kb": 0, 00:14:05.792 "state": "online", 00:14:05.792 "raid_level": "raid1", 00:14:05.792 "superblock": true, 00:14:05.792 "num_base_bdevs": 2, 00:14:05.792 "num_base_bdevs_discovered": 2, 00:14:05.792 "num_base_bdevs_operational": 2, 00:14:05.792 "base_bdevs_list": [ 00:14:05.792 { 00:14:05.792 "name": "spare", 00:14:05.792 "uuid": "b21535e1-5fec-5a85-9a86-f59a3f8f177d", 00:14:05.792 "is_configured": true, 00:14:05.792 "data_offset": 2048, 00:14:05.792 "data_size": 63488 00:14:05.792 }, 00:14:05.792 { 00:14:05.792 "name": "BaseBdev2", 00:14:05.792 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:14:05.792 "is_configured": true, 00:14:05.792 
"data_offset": 2048, 00:14:05.792 "data_size": 63488 00:14:05.792 } 00:14:05.792 ] 00:14:05.792 }' 00:14:05.793 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.793 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.052 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.052 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.052 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.052 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.052 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.052 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.052 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.052 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.052 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.052 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.052 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.052 "name": "raid_bdev1", 00:14:06.052 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:14:06.052 "strip_size_kb": 0, 00:14:06.052 "state": "online", 00:14:06.052 "raid_level": "raid1", 00:14:06.052 "superblock": true, 00:14:06.052 "num_base_bdevs": 2, 00:14:06.052 "num_base_bdevs_discovered": 2, 00:14:06.052 "num_base_bdevs_operational": 2, 00:14:06.052 "base_bdevs_list": [ 00:14:06.052 { 00:14:06.052 "name": "spare", 00:14:06.052 "uuid": 
"b21535e1-5fec-5a85-9a86-f59a3f8f177d", 00:14:06.052 "is_configured": true, 00:14:06.052 "data_offset": 2048, 00:14:06.052 "data_size": 63488 00:14:06.052 }, 00:14:06.052 { 00:14:06.052 "name": "BaseBdev2", 00:14:06.052 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:14:06.052 "is_configured": true, 00:14:06.052 "data_offset": 2048, 00:14:06.052 "data_size": 63488 00:14:06.052 } 00:14:06.052 ] 00:14:06.052 }' 00:14:06.052 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.311 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.311 12:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.311 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.312 [2024-09-30 12:31:18.063060] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.312 
12:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.312 "name": "raid_bdev1", 00:14:06.312 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 
00:14:06.312 "strip_size_kb": 0, 00:14:06.312 "state": "online", 00:14:06.312 "raid_level": "raid1", 00:14:06.312 "superblock": true, 00:14:06.312 "num_base_bdevs": 2, 00:14:06.312 "num_base_bdevs_discovered": 1, 00:14:06.312 "num_base_bdevs_operational": 1, 00:14:06.312 "base_bdevs_list": [ 00:14:06.312 { 00:14:06.312 "name": null, 00:14:06.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.312 "is_configured": false, 00:14:06.312 "data_offset": 0, 00:14:06.312 "data_size": 63488 00:14:06.312 }, 00:14:06.312 { 00:14:06.312 "name": "BaseBdev2", 00:14:06.312 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:14:06.312 "is_configured": true, 00:14:06.312 "data_offset": 2048, 00:14:06.312 "data_size": 63488 00:14:06.312 } 00:14:06.312 ] 00:14:06.312 }' 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.312 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.881 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:06.881 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.881 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.881 [2024-09-30 12:31:18.518354] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:06.881 [2024-09-30 12:31:18.518526] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:06.881 [2024-09-30 12:31:18.518542] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:06.881 [2024-09-30 12:31:18.518608] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:06.881 [2024-09-30 12:31:18.535085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:14:06.881 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.881 12:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:06.881 [2024-09-30 12:31:18.537250] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:07.820 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.820 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.820 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.820 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.820 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.820 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.820 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.820 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.820 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.820 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.820 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.820 "name": "raid_bdev1", 00:14:07.820 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:14:07.820 "strip_size_kb": 0, 00:14:07.820 "state": "online", 
00:14:07.820 "raid_level": "raid1", 00:14:07.820 "superblock": true, 00:14:07.820 "num_base_bdevs": 2, 00:14:07.820 "num_base_bdevs_discovered": 2, 00:14:07.820 "num_base_bdevs_operational": 2, 00:14:07.820 "process": { 00:14:07.820 "type": "rebuild", 00:14:07.820 "target": "spare", 00:14:07.820 "progress": { 00:14:07.820 "blocks": 20480, 00:14:07.820 "percent": 32 00:14:07.820 } 00:14:07.820 }, 00:14:07.820 "base_bdevs_list": [ 00:14:07.820 { 00:14:07.820 "name": "spare", 00:14:07.820 "uuid": "b21535e1-5fec-5a85-9a86-f59a3f8f177d", 00:14:07.820 "is_configured": true, 00:14:07.820 "data_offset": 2048, 00:14:07.820 "data_size": 63488 00:14:07.820 }, 00:14:07.820 { 00:14:07.820 "name": "BaseBdev2", 00:14:07.820 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:14:07.820 "is_configured": true, 00:14:07.820 "data_offset": 2048, 00:14:07.820 "data_size": 63488 00:14:07.820 } 00:14:07.820 ] 00:14:07.820 }' 00:14:07.820 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.820 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.820 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.820 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.820 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:07.820 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.820 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.820 [2024-09-30 12:31:19.696356] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.080 [2024-09-30 12:31:19.745869] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:08.080 [2024-09-30 
12:31:19.745932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.080 [2024-09-30 12:31:19.745946] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.080 [2024-09-30 12:31:19.745957] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:08.080 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.080 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:08.080 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.080 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.080 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.080 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.080 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:08.080 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.080 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.080 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.080 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.080 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.080 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.080 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.080 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:08.080 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.080 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.080 "name": "raid_bdev1", 00:14:08.080 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:14:08.080 "strip_size_kb": 0, 00:14:08.080 "state": "online", 00:14:08.080 "raid_level": "raid1", 00:14:08.080 "superblock": true, 00:14:08.080 "num_base_bdevs": 2, 00:14:08.080 "num_base_bdevs_discovered": 1, 00:14:08.080 "num_base_bdevs_operational": 1, 00:14:08.080 "base_bdevs_list": [ 00:14:08.080 { 00:14:08.080 "name": null, 00:14:08.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.080 "is_configured": false, 00:14:08.080 "data_offset": 0, 00:14:08.080 "data_size": 63488 00:14:08.080 }, 00:14:08.080 { 00:14:08.080 "name": "BaseBdev2", 00:14:08.080 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:14:08.080 "is_configured": true, 00:14:08.081 "data_offset": 2048, 00:14:08.081 "data_size": 63488 00:14:08.081 } 00:14:08.081 ] 00:14:08.081 }' 00:14:08.081 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.081 12:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.650 12:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:08.650 12:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.650 12:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.650 [2024-09-30 12:31:20.242468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:08.650 [2024-09-30 12:31:20.242597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.650 [2024-09-30 12:31:20.242651] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:14:08.650 [2024-09-30 12:31:20.242690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.650 [2024-09-30 12:31:20.243287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.650 [2024-09-30 12:31:20.243366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:08.650 [2024-09-30 12:31:20.243492] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:08.650 [2024-09-30 12:31:20.243536] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:08.650 [2024-09-30 12:31:20.243580] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:08.650 [2024-09-30 12:31:20.243652] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.650 [2024-09-30 12:31:20.259019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:14:08.650 spare 00:14:08.650 12:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.651 12:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:08.651 [2024-09-30 12:31:20.261226] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:09.590 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.590 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.590 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.590 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.590 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.590 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.590 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.590 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.590 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.590 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.590 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.590 "name": "raid_bdev1", 00:14:09.590 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:14:09.590 "strip_size_kb": 0, 00:14:09.590 "state": "online", 00:14:09.590 "raid_level": "raid1", 00:14:09.590 "superblock": true, 00:14:09.590 "num_base_bdevs": 2, 00:14:09.590 "num_base_bdevs_discovered": 2, 00:14:09.590 "num_base_bdevs_operational": 2, 00:14:09.590 "process": { 00:14:09.590 "type": "rebuild", 00:14:09.590 "target": "spare", 00:14:09.590 "progress": { 00:14:09.590 "blocks": 20480, 00:14:09.590 "percent": 32 00:14:09.590 } 00:14:09.590 }, 00:14:09.590 "base_bdevs_list": [ 00:14:09.590 { 00:14:09.590 "name": "spare", 00:14:09.590 "uuid": "b21535e1-5fec-5a85-9a86-f59a3f8f177d", 00:14:09.590 "is_configured": true, 00:14:09.590 "data_offset": 2048, 00:14:09.590 "data_size": 63488 00:14:09.590 }, 00:14:09.590 { 00:14:09.590 "name": "BaseBdev2", 00:14:09.590 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:14:09.590 "is_configured": true, 00:14:09.590 "data_offset": 2048, 00:14:09.590 "data_size": 63488 00:14:09.590 } 00:14:09.590 ] 00:14:09.590 }' 00:14:09.590 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.590 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:09.590 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.590 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.590 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:09.590 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.590 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.590 [2024-09-30 12:31:21.428288] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.590 [2024-09-30 12:31:21.469839] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:09.590 [2024-09-30 12:31:21.469957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.590 [2024-09-30 12:31:21.469979] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.590 [2024-09-30 12:31:21.469987] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:09.850 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.850 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:09.850 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.850 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.850 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.850 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.850 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:14:09.850 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.850 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.850 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.850 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.850 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.850 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.850 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.850 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.850 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.850 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.850 "name": "raid_bdev1", 00:14:09.850 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:14:09.850 "strip_size_kb": 0, 00:14:09.850 "state": "online", 00:14:09.850 "raid_level": "raid1", 00:14:09.850 "superblock": true, 00:14:09.850 "num_base_bdevs": 2, 00:14:09.850 "num_base_bdevs_discovered": 1, 00:14:09.850 "num_base_bdevs_operational": 1, 00:14:09.850 "base_bdevs_list": [ 00:14:09.850 { 00:14:09.850 "name": null, 00:14:09.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.850 "is_configured": false, 00:14:09.850 "data_offset": 0, 00:14:09.850 "data_size": 63488 00:14:09.850 }, 00:14:09.850 { 00:14:09.850 "name": "BaseBdev2", 00:14:09.850 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:14:09.850 "is_configured": true, 00:14:09.850 "data_offset": 2048, 00:14:09.850 "data_size": 63488 00:14:09.850 } 00:14:09.850 ] 00:14:09.850 }' 
00:14:09.850 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.850 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.111 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.111 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.111 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.111 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.111 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.111 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.111 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.111 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.111 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.111 12:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.371 12:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.371 "name": "raid_bdev1", 00:14:10.371 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:14:10.371 "strip_size_kb": 0, 00:14:10.371 "state": "online", 00:14:10.371 "raid_level": "raid1", 00:14:10.371 "superblock": true, 00:14:10.371 "num_base_bdevs": 2, 00:14:10.371 "num_base_bdevs_discovered": 1, 00:14:10.371 "num_base_bdevs_operational": 1, 00:14:10.371 "base_bdevs_list": [ 00:14:10.371 { 00:14:10.371 "name": null, 00:14:10.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.371 "is_configured": false, 00:14:10.371 "data_offset": 0, 
00:14:10.371 "data_size": 63488 00:14:10.371 }, 00:14:10.371 { 00:14:10.371 "name": "BaseBdev2", 00:14:10.371 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:14:10.371 "is_configured": true, 00:14:10.371 "data_offset": 2048, 00:14:10.371 "data_size": 63488 00:14:10.371 } 00:14:10.371 ] 00:14:10.371 }' 00:14:10.371 12:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.371 12:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.371 12:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.371 12:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.371 12:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:10.371 12:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.371 12:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.371 12:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.371 12:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:10.371 12:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.371 12:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.371 [2024-09-30 12:31:22.115099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:10.371 [2024-09-30 12:31:22.115195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.371 [2024-09-30 12:31:22.115256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:10.371 [2024-09-30 12:31:22.115288] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.371 [2024-09-30 12:31:22.115842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.371 [2024-09-30 12:31:22.115903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:10.371 [2024-09-30 12:31:22.116018] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:10.371 [2024-09-30 12:31:22.116060] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:10.371 [2024-09-30 12:31:22.116108] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:10.371 [2024-09-30 12:31:22.116154] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:10.371 BaseBdev1 00:14:10.371 12:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.371 12:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:11.310 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:11.310 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.310 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.310 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.310 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.310 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:11.310 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.310 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.310 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.310 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.310 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.310 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.310 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.310 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.310 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.310 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.310 "name": "raid_bdev1", 00:14:11.310 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:14:11.310 "strip_size_kb": 0, 00:14:11.310 "state": "online", 00:14:11.310 "raid_level": "raid1", 00:14:11.310 "superblock": true, 00:14:11.310 "num_base_bdevs": 2, 00:14:11.310 "num_base_bdevs_discovered": 1, 00:14:11.310 "num_base_bdevs_operational": 1, 00:14:11.310 "base_bdevs_list": [ 00:14:11.310 { 00:14:11.310 "name": null, 00:14:11.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.310 "is_configured": false, 00:14:11.310 "data_offset": 0, 00:14:11.310 "data_size": 63488 00:14:11.310 }, 00:14:11.310 { 00:14:11.310 "name": "BaseBdev2", 00:14:11.310 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:14:11.310 "is_configured": true, 00:14:11.310 "data_offset": 2048, 00:14:11.310 "data_size": 63488 00:14:11.310 } 00:14:11.310 ] 00:14:11.310 }' 00:14:11.310 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.310 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.879 "name": "raid_bdev1", 00:14:11.879 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:14:11.879 "strip_size_kb": 0, 00:14:11.879 "state": "online", 00:14:11.879 "raid_level": "raid1", 00:14:11.879 "superblock": true, 00:14:11.879 "num_base_bdevs": 2, 00:14:11.879 "num_base_bdevs_discovered": 1, 00:14:11.879 "num_base_bdevs_operational": 1, 00:14:11.879 "base_bdevs_list": [ 00:14:11.879 { 00:14:11.879 "name": null, 00:14:11.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.879 "is_configured": false, 00:14:11.879 "data_offset": 0, 00:14:11.879 "data_size": 63488 00:14:11.879 }, 00:14:11.879 { 00:14:11.879 "name": "BaseBdev2", 00:14:11.879 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:14:11.879 "is_configured": true, 
00:14:11.879 "data_offset": 2048, 00:14:11.879 "data_size": 63488 00:14:11.879 } 00:14:11.879 ] 00:14:11.879 }' 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:11.879 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.880 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:11.880 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.880 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.880 [2024-09-30 12:31:23.652828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.880 [2024-09-30 12:31:23.652987] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:11.880 [2024-09-30 12:31:23.653003] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:11.880 request: 00:14:11.880 { 00:14:11.880 "base_bdev": "BaseBdev1", 00:14:11.880 "raid_bdev": "raid_bdev1", 00:14:11.880 "method": "bdev_raid_add_base_bdev", 00:14:11.880 "req_id": 1 00:14:11.880 } 00:14:11.880 Got JSON-RPC error response 00:14:11.880 response: 00:14:11.880 { 00:14:11.880 "code": -22, 00:14:11.880 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:11.880 } 00:14:11.880 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:11.880 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:14:11.880 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.880 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.880 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.880 12:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:12.820 12:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:12.820 12:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.820 12:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.820 12:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.820 12:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.820 12:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:14:12.820 12:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.820 12:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.820 12:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.820 12:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.820 12:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.820 12:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.820 12:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.820 12:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.820 12:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.081 12:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.081 "name": "raid_bdev1", 00:14:13.081 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:14:13.081 "strip_size_kb": 0, 00:14:13.081 "state": "online", 00:14:13.081 "raid_level": "raid1", 00:14:13.081 "superblock": true, 00:14:13.081 "num_base_bdevs": 2, 00:14:13.081 "num_base_bdevs_discovered": 1, 00:14:13.081 "num_base_bdevs_operational": 1, 00:14:13.081 "base_bdevs_list": [ 00:14:13.081 { 00:14:13.081 "name": null, 00:14:13.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.081 "is_configured": false, 00:14:13.081 "data_offset": 0, 00:14:13.081 "data_size": 63488 00:14:13.081 }, 00:14:13.081 { 00:14:13.081 "name": "BaseBdev2", 00:14:13.081 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:14:13.081 "is_configured": true, 00:14:13.081 "data_offset": 2048, 00:14:13.081 "data_size": 63488 00:14:13.081 } 00:14:13.081 ] 00:14:13.081 }' 
00:14:13.081 12:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.081 12:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.341 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:13.341 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.341 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:13.341 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:13.341 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.341 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.341 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.341 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.341 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.341 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.341 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.341 "name": "raid_bdev1", 00:14:13.341 "uuid": "16e88eed-60a7-433c-9166-928378a6a28e", 00:14:13.341 "strip_size_kb": 0, 00:14:13.341 "state": "online", 00:14:13.341 "raid_level": "raid1", 00:14:13.341 "superblock": true, 00:14:13.341 "num_base_bdevs": 2, 00:14:13.341 "num_base_bdevs_discovered": 1, 00:14:13.341 "num_base_bdevs_operational": 1, 00:14:13.341 "base_bdevs_list": [ 00:14:13.341 { 00:14:13.341 "name": null, 00:14:13.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.341 "is_configured": false, 00:14:13.341 "data_offset": 0, 
00:14:13.341 "data_size": 63488 00:14:13.341 }, 00:14:13.341 { 00:14:13.341 "name": "BaseBdev2", 00:14:13.341 "uuid": "51d4c231-7473-5591-8aad-491cede9c518", 00:14:13.341 "is_configured": true, 00:14:13.341 "data_offset": 2048, 00:14:13.341 "data_size": 63488 00:14:13.341 } 00:14:13.341 ] 00:14:13.341 }' 00:14:13.341 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.341 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:13.341 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.601 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:13.601 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76714 00:14:13.601 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 76714 ']' 00:14:13.601 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 76714 00:14:13.601 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:14:13.601 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:13.601 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76714 00:14:13.601 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:13.601 killing process with pid 76714 00:14:13.601 Received shutdown signal, test time was about 16.740156 seconds 00:14:13.601 00:14:13.601 Latency(us) 00:14:13.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.601 =================================================================================================================== 00:14:13.601 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:13.601 12:31:25 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:13.601 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76714' 00:14:13.601 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 76714 00:14:13.601 12:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 76714 00:14:13.601 [2024-09-30 12:31:25.295882] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:13.601 [2024-09-30 12:31:25.296042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.601 [2024-09-30 12:31:25.296144] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.602 [2024-09-30 12:31:25.296162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:13.862 [2024-09-30 12:31:25.519272] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:15.244 00:14:15.244 real 0m19.997s 00:14:15.244 user 0m25.810s 00:14:15.244 sys 0m2.229s 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.244 ************************************ 00:14:15.244 END TEST raid_rebuild_test_sb_io 00:14:15.244 ************************************ 00:14:15.244 12:31:26 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:15.244 12:31:26 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:14:15.244 12:31:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:15.244 12:31:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:14:15.244 12:31:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:15.244 ************************************ 00:14:15.244 START TEST raid_rebuild_test 00:14:15.244 ************************************ 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77397 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77397 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 77397 ']' 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.244 12:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.244 [2024-09-30 12:31:27.008357] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:14:15.244 [2024-09-30 12:31:27.008553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:15.244 Zero copy mechanism will not be used. 00:14:15.244 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77397 ] 00:14:15.504 [2024-09-30 12:31:27.172551] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.764 [2024-09-30 12:31:27.414568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.764 [2024-09-30 12:31:27.636014] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.764 [2024-09-30 12:31:27.636154] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.024 12:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:16.024 12:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:16.024 12:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:16.024 12:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:16.024 12:31:27 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.024 12:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.024 BaseBdev1_malloc 00:14:16.024 12:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.024 12:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:16.024 12:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.024 12:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.024 [2024-09-30 12:31:27.885753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:16.024 [2024-09-30 12:31:27.885873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.024 [2024-09-30 12:31:27.885919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:16.024 [2024-09-30 12:31:27.885962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.024 [2024-09-30 12:31:27.888386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.024 [2024-09-30 12:31:27.888459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:16.024 BaseBdev1 00:14:16.024 12:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.024 12:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:16.024 12:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:16.024 12:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.024 12:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.285 BaseBdev2_malloc 00:14:16.285 12:31:27 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.285 12:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:16.285 12:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.285 12:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.285 [2024-09-30 12:31:27.974388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:16.285 [2024-09-30 12:31:27.974451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.285 [2024-09-30 12:31:27.974472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:16.285 [2024-09-30 12:31:27.974485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.285 [2024-09-30 12:31:27.976842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.285 [2024-09-30 12:31:27.976876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:16.285 BaseBdev2 00:14:16.285 12:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.285 12:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:16.285 12:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:16.285 12:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.285 12:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.285 BaseBdev3_malloc 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:16.285 12:31:28 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.285 [2024-09-30 12:31:28.035616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:16.285 [2024-09-30 12:31:28.035711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.285 [2024-09-30 12:31:28.035752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:16.285 [2024-09-30 12:31:28.035776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.285 [2024-09-30 12:31:28.038082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.285 [2024-09-30 12:31:28.038121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:16.285 BaseBdev3 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.285 BaseBdev4_malloc 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.285 [2024-09-30 12:31:28.094507] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:16.285 [2024-09-30 12:31:28.094557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.285 [2024-09-30 12:31:28.094593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:16.285 [2024-09-30 12:31:28.094604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.285 [2024-09-30 12:31:28.096975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.285 [2024-09-30 12:31:28.097050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:16.285 BaseBdev4 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.285 spare_malloc 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.285 spare_delay 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.285 [2024-09-30 12:31:28.166760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:16.285 [2024-09-30 12:31:28.166810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.285 [2024-09-30 12:31:28.166830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:16.285 [2024-09-30 12:31:28.166857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.285 [2024-09-30 12:31:28.169177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.285 [2024-09-30 12:31:28.169213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:16.285 spare 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.285 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.285 [2024-09-30 12:31:28.178806] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.546 [2024-09-30 12:31:28.180947] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:16.546 [2024-09-30 12:31:28.181058] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:16.546 [2024-09-30 12:31:28.181157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:16.546 [2024-09-30 12:31:28.181275] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:16.546 [2024-09-30 
12:31:28.181315] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:16.546 [2024-09-30 12:31:28.181593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:16.546 [2024-09-30 12:31:28.181820] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:16.546 [2024-09-30 12:31:28.181865] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:16.546 [2024-09-30 12:31:28.182065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.546 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.546 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:16.546 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.546 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.546 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.546 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.546 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.546 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.546 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.546 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.546 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.546 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.546 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:14:16.546 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.546 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.546 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.546 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.546 "name": "raid_bdev1", 00:14:16.546 "uuid": "909011eb-c5f5-4b88-8259-f2492040d18a", 00:14:16.546 "strip_size_kb": 0, 00:14:16.546 "state": "online", 00:14:16.546 "raid_level": "raid1", 00:14:16.546 "superblock": false, 00:14:16.546 "num_base_bdevs": 4, 00:14:16.546 "num_base_bdevs_discovered": 4, 00:14:16.546 "num_base_bdevs_operational": 4, 00:14:16.546 "base_bdevs_list": [ 00:14:16.546 { 00:14:16.546 "name": "BaseBdev1", 00:14:16.546 "uuid": "62159ffb-66d9-5374-b67d-297e6ac35dce", 00:14:16.546 "is_configured": true, 00:14:16.546 "data_offset": 0, 00:14:16.546 "data_size": 65536 00:14:16.546 }, 00:14:16.546 { 00:14:16.546 "name": "BaseBdev2", 00:14:16.546 "uuid": "7daa34de-e5c9-58a9-b59c-1c51e971fb2f", 00:14:16.546 "is_configured": true, 00:14:16.546 "data_offset": 0, 00:14:16.546 "data_size": 65536 00:14:16.546 }, 00:14:16.546 { 00:14:16.546 "name": "BaseBdev3", 00:14:16.546 "uuid": "04991482-af62-5513-ba2d-d5cda6dc5f2b", 00:14:16.546 "is_configured": true, 00:14:16.546 "data_offset": 0, 00:14:16.546 "data_size": 65536 00:14:16.546 }, 00:14:16.546 { 00:14:16.546 "name": "BaseBdev4", 00:14:16.546 "uuid": "144cb542-d091-5815-9376-5d1e70f4a485", 00:14:16.546 "is_configured": true, 00:14:16.546 "data_offset": 0, 00:14:16.546 "data_size": 65536 00:14:16.546 } 00:14:16.546 ] 00:14:16.546 }' 00:14:16.546 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.546 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.805 [2024-09-30 12:31:28.610276] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:16.805 12:31:28 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:16.805 12:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:17.065 [2024-09-30 12:31:28.893545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:17.065 /dev/nbd0 00:14:17.065 12:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:17.065 12:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:17.065 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:17.065 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:17.065 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:17.065 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:17.065 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:17.065 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:17.065 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:17.065 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:17.065 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:17.065 1+0 records in 00:14:17.065 1+0 records out 00:14:17.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592724 s, 6.9 MB/s 00:14:17.065 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:17.065 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:17.065 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:17.324 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:17.324 12:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:17.325 12:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:17.325 12:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:17.325 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:17.325 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:17.325 12:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:22.610 65536+0 records in 00:14:22.610 65536+0 records out 00:14:22.610 33554432 bytes (34 MB, 32 MiB) copied, 5.39953 s, 6.2 MB/s 00:14:22.610 12:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:22.610 12:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.610 12:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:22.610 12:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:22.610 12:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 
00:14:22.610 12:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:22.610 12:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:22.870 [2024-09-30 12:31:34.566606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.870 [2024-09-30 12:31:34.602621] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.870 "name": "raid_bdev1", 00:14:22.870 "uuid": "909011eb-c5f5-4b88-8259-f2492040d18a", 00:14:22.870 "strip_size_kb": 0, 00:14:22.870 "state": "online", 00:14:22.870 "raid_level": "raid1", 00:14:22.870 "superblock": false, 00:14:22.870 "num_base_bdevs": 4, 00:14:22.870 "num_base_bdevs_discovered": 3, 00:14:22.870 "num_base_bdevs_operational": 3, 00:14:22.870 "base_bdevs_list": [ 00:14:22.870 { 00:14:22.870 "name": null, 00:14:22.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.870 "is_configured": false, 00:14:22.870 "data_offset": 0, 00:14:22.870 "data_size": 
65536 00:14:22.870 }, 00:14:22.870 { 00:14:22.870 "name": "BaseBdev2", 00:14:22.870 "uuid": "7daa34de-e5c9-58a9-b59c-1c51e971fb2f", 00:14:22.870 "is_configured": true, 00:14:22.870 "data_offset": 0, 00:14:22.870 "data_size": 65536 00:14:22.870 }, 00:14:22.870 { 00:14:22.870 "name": "BaseBdev3", 00:14:22.870 "uuid": "04991482-af62-5513-ba2d-d5cda6dc5f2b", 00:14:22.870 "is_configured": true, 00:14:22.870 "data_offset": 0, 00:14:22.870 "data_size": 65536 00:14:22.870 }, 00:14:22.870 { 00:14:22.870 "name": "BaseBdev4", 00:14:22.870 "uuid": "144cb542-d091-5815-9376-5d1e70f4a485", 00:14:22.870 "is_configured": true, 00:14:22.870 "data_offset": 0, 00:14:22.870 "data_size": 65536 00:14:22.870 } 00:14:22.870 ] 00:14:22.870 }' 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.870 12:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.130 12:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:23.130 12:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.130 12:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.130 [2024-09-30 12:31:35.009892] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:23.130 [2024-09-30 12:31:35.022642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:23.398 12:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.398 12:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:23.398 [2024-09-30 12:31:35.024846] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:24.346 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.346 12:31:36 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.346 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.346 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.346 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.346 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.346 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.346 12:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.346 12:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.346 12:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.346 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.346 "name": "raid_bdev1", 00:14:24.346 "uuid": "909011eb-c5f5-4b88-8259-f2492040d18a", 00:14:24.346 "strip_size_kb": 0, 00:14:24.346 "state": "online", 00:14:24.346 "raid_level": "raid1", 00:14:24.346 "superblock": false, 00:14:24.346 "num_base_bdevs": 4, 00:14:24.346 "num_base_bdevs_discovered": 4, 00:14:24.346 "num_base_bdevs_operational": 4, 00:14:24.346 "process": { 00:14:24.346 "type": "rebuild", 00:14:24.346 "target": "spare", 00:14:24.346 "progress": { 00:14:24.346 "blocks": 20480, 00:14:24.346 "percent": 31 00:14:24.346 } 00:14:24.346 }, 00:14:24.346 "base_bdevs_list": [ 00:14:24.346 { 00:14:24.346 "name": "spare", 00:14:24.346 "uuid": "26cfe7b6-d851-5a2d-ac35-55c84aba8be6", 00:14:24.346 "is_configured": true, 00:14:24.346 "data_offset": 0, 00:14:24.346 "data_size": 65536 00:14:24.346 }, 00:14:24.346 { 00:14:24.346 "name": "BaseBdev2", 00:14:24.346 "uuid": "7daa34de-e5c9-58a9-b59c-1c51e971fb2f", 00:14:24.346 "is_configured": true, 00:14:24.346 "data_offset": 0, 
00:14:24.346 "data_size": 65536 00:14:24.346 }, 00:14:24.346 { 00:14:24.346 "name": "BaseBdev3", 00:14:24.346 "uuid": "04991482-af62-5513-ba2d-d5cda6dc5f2b", 00:14:24.346 "is_configured": true, 00:14:24.346 "data_offset": 0, 00:14:24.346 "data_size": 65536 00:14:24.346 }, 00:14:24.346 { 00:14:24.346 "name": "BaseBdev4", 00:14:24.346 "uuid": "144cb542-d091-5815-9376-5d1e70f4a485", 00:14:24.346 "is_configured": true, 00:14:24.346 "data_offset": 0, 00:14:24.346 "data_size": 65536 00:14:24.346 } 00:14:24.346 ] 00:14:24.346 }' 00:14:24.346 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.346 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.346 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.346 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.346 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:24.346 12:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.346 12:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.346 [2024-09-30 12:31:36.160766] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.346 [2024-09-30 12:31:36.233458] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:24.346 [2024-09-30 12:31:36.233531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.346 [2024-09-30 12:31:36.233549] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.346 [2024-09-30 12:31:36.233559] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:24.606 12:31:36 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.606 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:24.606 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.606 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.606 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.606 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.606 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.606 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.606 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.606 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.606 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.606 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.606 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.606 12:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.606 12:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.606 12:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.606 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.606 "name": "raid_bdev1", 00:14:24.606 "uuid": "909011eb-c5f5-4b88-8259-f2492040d18a", 00:14:24.606 "strip_size_kb": 0, 00:14:24.606 "state": "online", 00:14:24.606 "raid_level": "raid1", 00:14:24.606 "superblock": false, 00:14:24.606 
"num_base_bdevs": 4, 00:14:24.606 "num_base_bdevs_discovered": 3, 00:14:24.606 "num_base_bdevs_operational": 3, 00:14:24.606 "base_bdevs_list": [ 00:14:24.606 { 00:14:24.606 "name": null, 00:14:24.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.606 "is_configured": false, 00:14:24.606 "data_offset": 0, 00:14:24.606 "data_size": 65536 00:14:24.606 }, 00:14:24.606 { 00:14:24.606 "name": "BaseBdev2", 00:14:24.606 "uuid": "7daa34de-e5c9-58a9-b59c-1c51e971fb2f", 00:14:24.606 "is_configured": true, 00:14:24.606 "data_offset": 0, 00:14:24.606 "data_size": 65536 00:14:24.606 }, 00:14:24.606 { 00:14:24.606 "name": "BaseBdev3", 00:14:24.606 "uuid": "04991482-af62-5513-ba2d-d5cda6dc5f2b", 00:14:24.606 "is_configured": true, 00:14:24.606 "data_offset": 0, 00:14:24.606 "data_size": 65536 00:14:24.606 }, 00:14:24.606 { 00:14:24.606 "name": "BaseBdev4", 00:14:24.606 "uuid": "144cb542-d091-5815-9376-5d1e70f4a485", 00:14:24.606 "is_configured": true, 00:14:24.606 "data_offset": 0, 00:14:24.606 "data_size": 65536 00:14:24.606 } 00:14:24.606 ] 00:14:24.606 }' 00:14:24.606 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.606 12:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.866 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:24.866 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.866 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:24.866 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:24.866 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.866 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.866 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.866 12:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.866 12:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.866 12:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.866 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.866 "name": "raid_bdev1", 00:14:24.866 "uuid": "909011eb-c5f5-4b88-8259-f2492040d18a", 00:14:24.866 "strip_size_kb": 0, 00:14:24.866 "state": "online", 00:14:24.866 "raid_level": "raid1", 00:14:24.866 "superblock": false, 00:14:24.866 "num_base_bdevs": 4, 00:14:24.866 "num_base_bdevs_discovered": 3, 00:14:24.866 "num_base_bdevs_operational": 3, 00:14:24.866 "base_bdevs_list": [ 00:14:24.866 { 00:14:24.866 "name": null, 00:14:24.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.866 "is_configured": false, 00:14:24.866 "data_offset": 0, 00:14:24.866 "data_size": 65536 00:14:24.866 }, 00:14:24.866 { 00:14:24.866 "name": "BaseBdev2", 00:14:24.866 "uuid": "7daa34de-e5c9-58a9-b59c-1c51e971fb2f", 00:14:24.866 "is_configured": true, 00:14:24.866 "data_offset": 0, 00:14:24.866 "data_size": 65536 00:14:24.866 }, 00:14:24.866 { 00:14:24.866 "name": "BaseBdev3", 00:14:24.866 "uuid": "04991482-af62-5513-ba2d-d5cda6dc5f2b", 00:14:24.866 "is_configured": true, 00:14:24.866 "data_offset": 0, 00:14:24.866 "data_size": 65536 00:14:24.866 }, 00:14:24.866 { 00:14:24.866 "name": "BaseBdev4", 00:14:24.866 "uuid": "144cb542-d091-5815-9376-5d1e70f4a485", 00:14:24.866 "is_configured": true, 00:14:24.866 "data_offset": 0, 00:14:24.866 "data_size": 65536 00:14:24.866 } 00:14:24.866 ] 00:14:24.866 }' 00:14:24.866 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.126 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:25.126 12:31:36 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.126 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:25.126 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:25.126 12:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.126 12:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.126 [2024-09-30 12:31:36.851202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.126 [2024-09-30 12:31:36.864922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:25.126 12:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.126 12:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:25.126 [2024-09-30 12:31:36.867112] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:26.066 12:31:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.066 12:31:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.066 12:31:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.066 12:31:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.066 12:31:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.066 12:31:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.066 12:31:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.066 12:31:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.066 
12:31:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.066 12:31:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.066 12:31:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.066 "name": "raid_bdev1", 00:14:26.066 "uuid": "909011eb-c5f5-4b88-8259-f2492040d18a", 00:14:26.066 "strip_size_kb": 0, 00:14:26.066 "state": "online", 00:14:26.066 "raid_level": "raid1", 00:14:26.066 "superblock": false, 00:14:26.066 "num_base_bdevs": 4, 00:14:26.066 "num_base_bdevs_discovered": 4, 00:14:26.066 "num_base_bdevs_operational": 4, 00:14:26.066 "process": { 00:14:26.066 "type": "rebuild", 00:14:26.066 "target": "spare", 00:14:26.066 "progress": { 00:14:26.066 "blocks": 20480, 00:14:26.066 "percent": 31 00:14:26.066 } 00:14:26.066 }, 00:14:26.066 "base_bdevs_list": [ 00:14:26.066 { 00:14:26.066 "name": "spare", 00:14:26.066 "uuid": "26cfe7b6-d851-5a2d-ac35-55c84aba8be6", 00:14:26.066 "is_configured": true, 00:14:26.066 "data_offset": 0, 00:14:26.066 "data_size": 65536 00:14:26.066 }, 00:14:26.066 { 00:14:26.066 "name": "BaseBdev2", 00:14:26.066 "uuid": "7daa34de-e5c9-58a9-b59c-1c51e971fb2f", 00:14:26.066 "is_configured": true, 00:14:26.066 "data_offset": 0, 00:14:26.066 "data_size": 65536 00:14:26.066 }, 00:14:26.066 { 00:14:26.066 "name": "BaseBdev3", 00:14:26.066 "uuid": "04991482-af62-5513-ba2d-d5cda6dc5f2b", 00:14:26.066 "is_configured": true, 00:14:26.067 "data_offset": 0, 00:14:26.067 "data_size": 65536 00:14:26.067 }, 00:14:26.067 { 00:14:26.067 "name": "BaseBdev4", 00:14:26.067 "uuid": "144cb542-d091-5815-9376-5d1e70f4a485", 00:14:26.067 "is_configured": true, 00:14:26.067 "data_offset": 0, 00:14:26.067 "data_size": 65536 00:14:26.067 } 00:14:26.067 ] 00:14:26.067 }' 00:14:26.067 12:31:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.327 12:31:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:26.327 12:31:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.327 [2024-09-30 12:31:38.026855] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:26.327 [2024-09-30 12:31:38.074161] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.327 
12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.327 "name": "raid_bdev1", 00:14:26.327 "uuid": "909011eb-c5f5-4b88-8259-f2492040d18a", 00:14:26.327 "strip_size_kb": 0, 00:14:26.327 "state": "online", 00:14:26.327 "raid_level": "raid1", 00:14:26.327 "superblock": false, 00:14:26.327 "num_base_bdevs": 4, 00:14:26.327 "num_base_bdevs_discovered": 3, 00:14:26.327 "num_base_bdevs_operational": 3, 00:14:26.327 "process": { 00:14:26.327 "type": "rebuild", 00:14:26.327 "target": "spare", 00:14:26.327 "progress": { 00:14:26.327 "blocks": 24576, 00:14:26.327 "percent": 37 00:14:26.327 } 00:14:26.327 }, 00:14:26.327 "base_bdevs_list": [ 00:14:26.327 { 00:14:26.327 "name": "spare", 00:14:26.327 "uuid": "26cfe7b6-d851-5a2d-ac35-55c84aba8be6", 00:14:26.327 "is_configured": true, 00:14:26.327 "data_offset": 0, 00:14:26.327 "data_size": 65536 00:14:26.327 }, 00:14:26.327 { 00:14:26.327 "name": null, 00:14:26.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.327 "is_configured": false, 00:14:26.327 "data_offset": 0, 00:14:26.327 "data_size": 65536 00:14:26.327 }, 00:14:26.327 { 00:14:26.327 "name": "BaseBdev3", 00:14:26.327 "uuid": "04991482-af62-5513-ba2d-d5cda6dc5f2b", 00:14:26.327 "is_configured": true, 00:14:26.327 "data_offset": 0, 00:14:26.327 "data_size": 65536 00:14:26.327 }, 00:14:26.327 { 
00:14:26.327 "name": "BaseBdev4", 00:14:26.327 "uuid": "144cb542-d091-5815-9376-5d1e70f4a485", 00:14:26.327 "is_configured": true, 00:14:26.327 "data_offset": 0, 00:14:26.327 "data_size": 65536 00:14:26.327 } 00:14:26.327 ] 00:14:26.327 }' 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.327 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.587 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.587 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=443 00:14:26.587 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:26.587 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.587 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.588 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.588 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.588 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.588 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.588 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.588 12:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.588 12:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.588 12:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.588 12:31:38 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.588 "name": "raid_bdev1", 00:14:26.588 "uuid": "909011eb-c5f5-4b88-8259-f2492040d18a", 00:14:26.588 "strip_size_kb": 0, 00:14:26.588 "state": "online", 00:14:26.588 "raid_level": "raid1", 00:14:26.588 "superblock": false, 00:14:26.588 "num_base_bdevs": 4, 00:14:26.588 "num_base_bdevs_discovered": 3, 00:14:26.588 "num_base_bdevs_operational": 3, 00:14:26.588 "process": { 00:14:26.588 "type": "rebuild", 00:14:26.588 "target": "spare", 00:14:26.588 "progress": { 00:14:26.588 "blocks": 26624, 00:14:26.588 "percent": 40 00:14:26.588 } 00:14:26.588 }, 00:14:26.588 "base_bdevs_list": [ 00:14:26.588 { 00:14:26.588 "name": "spare", 00:14:26.588 "uuid": "26cfe7b6-d851-5a2d-ac35-55c84aba8be6", 00:14:26.588 "is_configured": true, 00:14:26.588 "data_offset": 0, 00:14:26.588 "data_size": 65536 00:14:26.588 }, 00:14:26.588 { 00:14:26.588 "name": null, 00:14:26.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.588 "is_configured": false, 00:14:26.588 "data_offset": 0, 00:14:26.588 "data_size": 65536 00:14:26.588 }, 00:14:26.588 { 00:14:26.588 "name": "BaseBdev3", 00:14:26.588 "uuid": "04991482-af62-5513-ba2d-d5cda6dc5f2b", 00:14:26.588 "is_configured": true, 00:14:26.588 "data_offset": 0, 00:14:26.588 "data_size": 65536 00:14:26.588 }, 00:14:26.588 { 00:14:26.588 "name": "BaseBdev4", 00:14:26.588 "uuid": "144cb542-d091-5815-9376-5d1e70f4a485", 00:14:26.588 "is_configured": true, 00:14:26.588 "data_offset": 0, 00:14:26.588 "data_size": 65536 00:14:26.588 } 00:14:26.588 ] 00:14:26.588 }' 00:14:26.588 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.588 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.588 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.588 12:31:38 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.588 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:27.524 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:27.524 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.524 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.524 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.524 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.524 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.524 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.524 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.524 12:31:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.524 12:31:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.524 12:31:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.783 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.783 "name": "raid_bdev1", 00:14:27.783 "uuid": "909011eb-c5f5-4b88-8259-f2492040d18a", 00:14:27.783 "strip_size_kb": 0, 00:14:27.783 "state": "online", 00:14:27.783 "raid_level": "raid1", 00:14:27.783 "superblock": false, 00:14:27.783 "num_base_bdevs": 4, 00:14:27.783 "num_base_bdevs_discovered": 3, 00:14:27.783 "num_base_bdevs_operational": 3, 00:14:27.783 "process": { 00:14:27.783 "type": "rebuild", 00:14:27.783 "target": "spare", 00:14:27.783 "progress": { 00:14:27.783 "blocks": 51200, 00:14:27.783 "percent": 78 00:14:27.783 } 00:14:27.783 }, 00:14:27.783 
"base_bdevs_list": [ 00:14:27.783 { 00:14:27.783 "name": "spare", 00:14:27.783 "uuid": "26cfe7b6-d851-5a2d-ac35-55c84aba8be6", 00:14:27.783 "is_configured": true, 00:14:27.783 "data_offset": 0, 00:14:27.783 "data_size": 65536 00:14:27.783 }, 00:14:27.783 { 00:14:27.783 "name": null, 00:14:27.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.783 "is_configured": false, 00:14:27.783 "data_offset": 0, 00:14:27.783 "data_size": 65536 00:14:27.783 }, 00:14:27.783 { 00:14:27.783 "name": "BaseBdev3", 00:14:27.783 "uuid": "04991482-af62-5513-ba2d-d5cda6dc5f2b", 00:14:27.783 "is_configured": true, 00:14:27.783 "data_offset": 0, 00:14:27.783 "data_size": 65536 00:14:27.783 }, 00:14:27.783 { 00:14:27.783 "name": "BaseBdev4", 00:14:27.783 "uuid": "144cb542-d091-5815-9376-5d1e70f4a485", 00:14:27.783 "is_configured": true, 00:14:27.783 "data_offset": 0, 00:14:27.783 "data_size": 65536 00:14:27.783 } 00:14:27.783 ] 00:14:27.783 }' 00:14:27.783 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.783 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.783 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.783 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.783 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:28.352 [2024-09-30 12:31:40.082014] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:28.352 [2024-09-30 12:31:40.082132] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:28.352 [2024-09-30 12:31:40.082204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.921 12:31:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.921 "name": "raid_bdev1", 00:14:28.921 "uuid": "909011eb-c5f5-4b88-8259-f2492040d18a", 00:14:28.921 "strip_size_kb": 0, 00:14:28.921 "state": "online", 00:14:28.921 "raid_level": "raid1", 00:14:28.921 "superblock": false, 00:14:28.921 "num_base_bdevs": 4, 00:14:28.921 "num_base_bdevs_discovered": 3, 00:14:28.921 "num_base_bdevs_operational": 3, 00:14:28.921 "base_bdevs_list": [ 00:14:28.921 { 00:14:28.921 "name": "spare", 00:14:28.921 "uuid": "26cfe7b6-d851-5a2d-ac35-55c84aba8be6", 00:14:28.921 "is_configured": true, 00:14:28.921 "data_offset": 0, 00:14:28.921 "data_size": 65536 00:14:28.921 }, 00:14:28.921 { 00:14:28.921 "name": null, 00:14:28.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.921 "is_configured": false, 00:14:28.921 "data_offset": 0, 00:14:28.921 "data_size": 65536 00:14:28.921 }, 
00:14:28.921 { 00:14:28.921 "name": "BaseBdev3", 00:14:28.921 "uuid": "04991482-af62-5513-ba2d-d5cda6dc5f2b", 00:14:28.921 "is_configured": true, 00:14:28.921 "data_offset": 0, 00:14:28.921 "data_size": 65536 00:14:28.921 }, 00:14:28.921 { 00:14:28.921 "name": "BaseBdev4", 00:14:28.921 "uuid": "144cb542-d091-5815-9376-5d1e70f4a485", 00:14:28.921 "is_configured": true, 00:14:28.921 "data_offset": 0, 00:14:28.921 "data_size": 65536 00:14:28.921 } 00:14:28.921 ] 00:14:28.921 }' 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.921 
12:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.921 "name": "raid_bdev1", 00:14:28.921 "uuid": "909011eb-c5f5-4b88-8259-f2492040d18a", 00:14:28.921 "strip_size_kb": 0, 00:14:28.921 "state": "online", 00:14:28.921 "raid_level": "raid1", 00:14:28.921 "superblock": false, 00:14:28.921 "num_base_bdevs": 4, 00:14:28.921 "num_base_bdevs_discovered": 3, 00:14:28.921 "num_base_bdevs_operational": 3, 00:14:28.921 "base_bdevs_list": [ 00:14:28.921 { 00:14:28.921 "name": "spare", 00:14:28.921 "uuid": "26cfe7b6-d851-5a2d-ac35-55c84aba8be6", 00:14:28.921 "is_configured": true, 00:14:28.921 "data_offset": 0, 00:14:28.921 "data_size": 65536 00:14:28.921 }, 00:14:28.921 { 00:14:28.921 "name": null, 00:14:28.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.921 "is_configured": false, 00:14:28.921 "data_offset": 0, 00:14:28.921 "data_size": 65536 00:14:28.921 }, 00:14:28.921 { 00:14:28.921 "name": "BaseBdev3", 00:14:28.921 "uuid": "04991482-af62-5513-ba2d-d5cda6dc5f2b", 00:14:28.921 "is_configured": true, 00:14:28.921 "data_offset": 0, 00:14:28.921 "data_size": 65536 00:14:28.921 }, 00:14:28.921 { 00:14:28.921 "name": "BaseBdev4", 00:14:28.921 "uuid": "144cb542-d091-5815-9376-5d1e70f4a485", 00:14:28.921 "is_configured": true, 00:14:28.921 "data_offset": 0, 00:14:28.921 "data_size": 65536 00:14:28.921 } 00:14:28.921 ] 00:14:28.921 }' 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:28.921 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.181 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:29.181 12:31:40 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:29.181 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.181 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.181 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.181 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.181 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.181 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.181 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.181 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.181 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.181 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.181 12:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.181 12:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.181 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.181 12:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.181 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.181 "name": "raid_bdev1", 00:14:29.181 "uuid": "909011eb-c5f5-4b88-8259-f2492040d18a", 00:14:29.181 "strip_size_kb": 0, 00:14:29.181 "state": "online", 00:14:29.181 "raid_level": "raid1", 00:14:29.181 "superblock": false, 00:14:29.181 "num_base_bdevs": 4, 00:14:29.181 "num_base_bdevs_discovered": 3, 00:14:29.181 
"num_base_bdevs_operational": 3, 00:14:29.181 "base_bdevs_list": [ 00:14:29.181 { 00:14:29.181 "name": "spare", 00:14:29.181 "uuid": "26cfe7b6-d851-5a2d-ac35-55c84aba8be6", 00:14:29.181 "is_configured": true, 00:14:29.181 "data_offset": 0, 00:14:29.181 "data_size": 65536 00:14:29.181 }, 00:14:29.181 { 00:14:29.181 "name": null, 00:14:29.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.181 "is_configured": false, 00:14:29.181 "data_offset": 0, 00:14:29.181 "data_size": 65536 00:14:29.181 }, 00:14:29.181 { 00:14:29.181 "name": "BaseBdev3", 00:14:29.181 "uuid": "04991482-af62-5513-ba2d-d5cda6dc5f2b", 00:14:29.181 "is_configured": true, 00:14:29.181 "data_offset": 0, 00:14:29.181 "data_size": 65536 00:14:29.181 }, 00:14:29.181 { 00:14:29.181 "name": "BaseBdev4", 00:14:29.181 "uuid": "144cb542-d091-5815-9376-5d1e70f4a485", 00:14:29.181 "is_configured": true, 00:14:29.181 "data_offset": 0, 00:14:29.181 "data_size": 65536 00:14:29.181 } 00:14:29.181 ] 00:14:29.181 }' 00:14:29.181 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.181 12:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.441 [2024-09-30 12:31:41.187283] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:29.441 [2024-09-30 12:31:41.187415] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.441 [2024-09-30 12:31:41.187522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.441 [2024-09-30 12:31:41.187632] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:14:29.441 [2024-09-30 12:31:41.187687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:29.441 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:29.701 /dev/nbd0 00:14:29.701 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:29.701 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:29.701 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:29.701 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:29.701 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:29.701 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:29.701 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:29.701 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:29.701 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:29.701 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:29.701 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:29.701 1+0 records in 00:14:29.701 1+0 records out 00:14:29.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353943 s, 11.6 MB/s 00:14:29.701 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.701 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:29.701 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.701 12:31:41 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:29.701 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:29.701 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:29.701 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:29.701 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:29.961 /dev/nbd1 00:14:29.961 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:29.961 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:29.961 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:29.961 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:29.961 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:29.961 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:29.961 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:29.961 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:29.961 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:29.961 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:29.961 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:29.961 1+0 records in 00:14:29.961 1+0 records out 00:14:29.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037505 s, 10.9 MB/s 00:14:29.961 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.961 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:29.961 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.961 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:29.961 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:29.961 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:29.961 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:29.961 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:30.221 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:30.221 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:30.221 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:30.221 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:30.221 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:30.221 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.221 12:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:30.481 12:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:30.481 12:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:30.481 12:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:30.481 12:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:14:30.481 12:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.481 12:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:30.481 12:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:30.481 12:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.481 12:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.481 12:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:30.481 12:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:30.481 12:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:30.481 12:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:30.481 12:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.481 12:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.481 12:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:30.741 12:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:30.741 12:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.741 12:31:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:30.741 12:31:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77397 00:14:30.741 12:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 77397 ']' 00:14:30.741 12:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 77397 00:14:30.741 12:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:14:30.741 12:31:42 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:30.741 12:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77397 00:14:30.741 12:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:30.741 12:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:30.741 killing process with pid 77397 00:14:30.741 12:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77397' 00:14:30.741 12:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 77397 00:14:30.741 Received shutdown signal, test time was about 60.000000 seconds 00:14:30.741 00:14:30.741 Latency(us) 00:14:30.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.741 =================================================================================================================== 00:14:30.741 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:30.741 [2024-09-30 12:31:42.424303] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:30.741 12:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 77397 00:14:31.310 [2024-09-30 12:31:42.909571] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:32.693 00:14:32.693 real 0m17.249s 00:14:32.693 user 0m19.009s 00:14:32.693 sys 0m3.333s 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.693 ************************************ 00:14:32.693 END TEST raid_rebuild_test 00:14:32.693 ************************************ 00:14:32.693 12:31:44 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb 
raid_rebuild_test raid1 4 true false true 00:14:32.693 12:31:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:32.693 12:31:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:32.693 12:31:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:32.693 ************************************ 00:14:32.693 START TEST raid_rebuild_test_sb 00:14:32.693 ************************************ 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 
00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:32.693 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:32.694 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77844 00:14:32.694 12:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:32.694 12:31:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77844 00:14:32.694 12:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77844 ']' 00:14:32.694 12:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.694 12:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:32.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.694 12:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.694 12:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:32.694 12:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.694 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:32.694 Zero copy mechanism will not be used. 00:14:32.694 [2024-09-30 12:31:44.344380] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:14:32.694 [2024-09-30 12:31:44.344500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77844 ] 00:14:32.694 [2024-09-30 12:31:44.508050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.954 [2024-09-30 12:31:44.716703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.214 [2024-09-30 12:31:44.939856] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.214 [2024-09-30 12:31:44.939922] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.474 BaseBdev1_malloc 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.474 [2024-09-30 12:31:45.217714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:33.474 [2024-09-30 12:31:45.217798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.474 [2024-09-30 12:31:45.217824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:33.474 [2024-09-30 12:31:45.217838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.474 [2024-09-30 12:31:45.219893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.474 [2024-09-30 12:31:45.219937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:33.474 BaseBdev1 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.474 BaseBdev2_malloc 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.474 [2024-09-30 12:31:45.285924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:33.474 [2024-09-30 12:31:45.285997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.474 [2024-09-30 12:31:45.286016] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:33.474 [2024-09-30 12:31:45.286027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.474 [2024-09-30 12:31:45.288047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.474 [2024-09-30 12:31:45.288087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:33.474 BaseBdev2 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.474 BaseBdev3_malloc 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.474 [2024-09-30 12:31:45.337141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:33.474 [2024-09-30 12:31:45.337201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.474 [2024-09-30 12:31:45.337220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:33.474 [2024-09-30 12:31:45.337231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:33.474 [2024-09-30 12:31:45.339197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.474 [2024-09-30 12:31:45.339237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:33.474 BaseBdev3 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.474 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.735 BaseBdev4_malloc 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.735 [2024-09-30 12:31:45.393146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:33.735 [2024-09-30 12:31:45.393204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.735 [2024-09-30 12:31:45.393223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:33.735 [2024-09-30 12:31:45.393233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.735 [2024-09-30 12:31:45.395195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.735 [2024-09-30 12:31:45.395238] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:33.735 BaseBdev4 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.735 spare_malloc 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.735 spare_delay 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.735 [2024-09-30 12:31:45.461448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:33.735 [2024-09-30 12:31:45.461518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.735 [2024-09-30 12:31:45.461538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:33.735 [2024-09-30 12:31:45.461549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:33.735 [2024-09-30 12:31:45.463530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.735 [2024-09-30 12:31:45.463570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:33.735 spare 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.735 [2024-09-30 12:31:45.473489] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:33.735 [2024-09-30 12:31:45.475191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:33.735 [2024-09-30 12:31:45.475260] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:33.735 [2024-09-30 12:31:45.475312] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:33.735 [2024-09-30 12:31:45.475500] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:33.735 [2024-09-30 12:31:45.475593] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:33.735 [2024-09-30 12:31:45.475842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:33.735 [2024-09-30 12:31:45.476016] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:33.735 [2024-09-30 12:31:45.476034] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:33.735 [2024-09-30 12:31:45.476183] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.735 "name": "raid_bdev1", 00:14:33.735 "uuid": 
"16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:33.735 "strip_size_kb": 0, 00:14:33.735 "state": "online", 00:14:33.735 "raid_level": "raid1", 00:14:33.735 "superblock": true, 00:14:33.735 "num_base_bdevs": 4, 00:14:33.735 "num_base_bdevs_discovered": 4, 00:14:33.735 "num_base_bdevs_operational": 4, 00:14:33.735 "base_bdevs_list": [ 00:14:33.735 { 00:14:33.735 "name": "BaseBdev1", 00:14:33.735 "uuid": "57e19b87-4988-57df-be8c-f04832707358", 00:14:33.735 "is_configured": true, 00:14:33.735 "data_offset": 2048, 00:14:33.735 "data_size": 63488 00:14:33.735 }, 00:14:33.735 { 00:14:33.735 "name": "BaseBdev2", 00:14:33.735 "uuid": "2a7c95e1-958e-58a0-9932-c247068d5c32", 00:14:33.735 "is_configured": true, 00:14:33.735 "data_offset": 2048, 00:14:33.735 "data_size": 63488 00:14:33.735 }, 00:14:33.735 { 00:14:33.735 "name": "BaseBdev3", 00:14:33.735 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:33.735 "is_configured": true, 00:14:33.735 "data_offset": 2048, 00:14:33.735 "data_size": 63488 00:14:33.735 }, 00:14:33.735 { 00:14:33.735 "name": "BaseBdev4", 00:14:33.735 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:33.735 "is_configured": true, 00:14:33.735 "data_offset": 2048, 00:14:33.735 "data_size": 63488 00:14:33.735 } 00:14:33.735 ] 00:14:33.735 }' 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.735 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.305 [2024-09-30 12:31:45.901073] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:34.305 12:31:45 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:34.305 12:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:34.305 [2024-09-30 12:31:46.152350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:34.305 /dev/nbd0 00:14:34.305 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:34.305 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:34.305 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:34.305 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:34.305 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:34.305 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:34.305 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:34.305 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:34.305 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:34.305 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:34.305 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:34.565 1+0 records in 00:14:34.565 1+0 records out 00:14:34.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367295 s, 11.2 MB/s 00:14:34.565 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.565 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:34.565 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.565 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:34.565 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:34.565 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:34.565 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:34.565 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:34.565 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:34.565 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:39.846 63488+0 records in 00:14:39.846 63488+0 records out 00:14:39.846 32505856 bytes (33 MB, 31 MiB) copied, 5.25115 s, 6.2 MB/s 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:14:39.846 [2024-09-30 12:31:51.661226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.846 [2024-09-30 12:31:51.697242] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.846 12:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.106 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.106 "name": "raid_bdev1", 00:14:40.106 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:40.106 "strip_size_kb": 0, 00:14:40.106 "state": "online", 00:14:40.106 "raid_level": "raid1", 00:14:40.106 "superblock": true, 00:14:40.106 "num_base_bdevs": 4, 00:14:40.106 "num_base_bdevs_discovered": 3, 00:14:40.106 "num_base_bdevs_operational": 3, 00:14:40.106 "base_bdevs_list": [ 00:14:40.106 { 00:14:40.106 "name": null, 00:14:40.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.106 "is_configured": false, 00:14:40.106 "data_offset": 0, 00:14:40.106 "data_size": 63488 00:14:40.106 }, 00:14:40.106 { 00:14:40.106 "name": "BaseBdev2", 00:14:40.106 "uuid": "2a7c95e1-958e-58a0-9932-c247068d5c32", 00:14:40.106 "is_configured": true, 00:14:40.106 
"data_offset": 2048, 00:14:40.106 "data_size": 63488 00:14:40.106 }, 00:14:40.106 { 00:14:40.106 "name": "BaseBdev3", 00:14:40.106 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:40.106 "is_configured": true, 00:14:40.106 "data_offset": 2048, 00:14:40.106 "data_size": 63488 00:14:40.106 }, 00:14:40.106 { 00:14:40.106 "name": "BaseBdev4", 00:14:40.106 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:40.106 "is_configured": true, 00:14:40.106 "data_offset": 2048, 00:14:40.106 "data_size": 63488 00:14:40.106 } 00:14:40.106 ] 00:14:40.106 }' 00:14:40.106 12:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.106 12:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.366 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:40.366 12:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.366 12:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.366 [2024-09-30 12:31:52.144455] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.366 [2024-09-30 12:31:52.157422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:40.366 12:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.366 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:40.366 [2024-09-30 12:31:52.159098] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:41.323 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.323 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.323 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:41.323 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.323 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.323 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.323 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.323 12:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.323 12:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.323 12:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.603 "name": "raid_bdev1", 00:14:41.603 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:41.603 "strip_size_kb": 0, 00:14:41.603 "state": "online", 00:14:41.603 "raid_level": "raid1", 00:14:41.603 "superblock": true, 00:14:41.603 "num_base_bdevs": 4, 00:14:41.603 "num_base_bdevs_discovered": 4, 00:14:41.603 "num_base_bdevs_operational": 4, 00:14:41.603 "process": { 00:14:41.603 "type": "rebuild", 00:14:41.603 "target": "spare", 00:14:41.603 "progress": { 00:14:41.603 "blocks": 20480, 00:14:41.603 "percent": 32 00:14:41.603 } 00:14:41.603 }, 00:14:41.603 "base_bdevs_list": [ 00:14:41.603 { 00:14:41.603 "name": "spare", 00:14:41.603 "uuid": "49333ea7-11f7-5a2e-a0cf-60de2228d996", 00:14:41.603 "is_configured": true, 00:14:41.603 "data_offset": 2048, 00:14:41.603 "data_size": 63488 00:14:41.603 }, 00:14:41.603 { 00:14:41.603 "name": "BaseBdev2", 00:14:41.603 "uuid": "2a7c95e1-958e-58a0-9932-c247068d5c32", 00:14:41.603 "is_configured": true, 00:14:41.603 "data_offset": 2048, 00:14:41.603 "data_size": 63488 00:14:41.603 }, 00:14:41.603 { 00:14:41.603 "name": "BaseBdev3", 00:14:41.603 "uuid": 
"d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:41.603 "is_configured": true, 00:14:41.603 "data_offset": 2048, 00:14:41.603 "data_size": 63488 00:14:41.603 }, 00:14:41.603 { 00:14:41.603 "name": "BaseBdev4", 00:14:41.603 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:41.603 "is_configured": true, 00:14:41.603 "data_offset": 2048, 00:14:41.603 "data_size": 63488 00:14:41.603 } 00:14:41.603 ] 00:14:41.603 }' 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.603 [2024-09-30 12:31:53.323634] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:41.603 [2024-09-30 12:31:53.363549] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:41.603 [2024-09-30 12:31:53.363606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.603 [2024-09-30 12:31:53.363622] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:41.603 [2024-09-30 12:31:53.363630] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.603 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.603 "name": "raid_bdev1", 00:14:41.603 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:41.603 "strip_size_kb": 0, 00:14:41.603 "state": "online", 00:14:41.603 "raid_level": "raid1", 00:14:41.603 "superblock": true, 00:14:41.603 "num_base_bdevs": 4, 00:14:41.603 
"num_base_bdevs_discovered": 3, 00:14:41.603 "num_base_bdevs_operational": 3, 00:14:41.603 "base_bdevs_list": [ 00:14:41.603 { 00:14:41.603 "name": null, 00:14:41.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.603 "is_configured": false, 00:14:41.603 "data_offset": 0, 00:14:41.603 "data_size": 63488 00:14:41.603 }, 00:14:41.603 { 00:14:41.603 "name": "BaseBdev2", 00:14:41.603 "uuid": "2a7c95e1-958e-58a0-9932-c247068d5c32", 00:14:41.603 "is_configured": true, 00:14:41.603 "data_offset": 2048, 00:14:41.603 "data_size": 63488 00:14:41.603 }, 00:14:41.603 { 00:14:41.603 "name": "BaseBdev3", 00:14:41.603 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:41.603 "is_configured": true, 00:14:41.603 "data_offset": 2048, 00:14:41.604 "data_size": 63488 00:14:41.604 }, 00:14:41.604 { 00:14:41.604 "name": "BaseBdev4", 00:14:41.604 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:41.604 "is_configured": true, 00:14:41.604 "data_offset": 2048, 00:14:41.604 "data_size": 63488 00:14:41.604 } 00:14:41.604 ] 00:14:41.604 }' 00:14:41.604 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.604 12:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.173 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:42.174 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.174 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:42.174 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:42.174 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.174 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.174 12:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:14:42.174 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.174 12:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.174 12:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.174 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.174 "name": "raid_bdev1", 00:14:42.174 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:42.174 "strip_size_kb": 0, 00:14:42.174 "state": "online", 00:14:42.174 "raid_level": "raid1", 00:14:42.174 "superblock": true, 00:14:42.174 "num_base_bdevs": 4, 00:14:42.174 "num_base_bdevs_discovered": 3, 00:14:42.174 "num_base_bdevs_operational": 3, 00:14:42.174 "base_bdevs_list": [ 00:14:42.174 { 00:14:42.174 "name": null, 00:14:42.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.174 "is_configured": false, 00:14:42.174 "data_offset": 0, 00:14:42.174 "data_size": 63488 00:14:42.174 }, 00:14:42.174 { 00:14:42.174 "name": "BaseBdev2", 00:14:42.174 "uuid": "2a7c95e1-958e-58a0-9932-c247068d5c32", 00:14:42.174 "is_configured": true, 00:14:42.174 "data_offset": 2048, 00:14:42.174 "data_size": 63488 00:14:42.174 }, 00:14:42.174 { 00:14:42.174 "name": "BaseBdev3", 00:14:42.174 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:42.174 "is_configured": true, 00:14:42.174 "data_offset": 2048, 00:14:42.174 "data_size": 63488 00:14:42.174 }, 00:14:42.174 { 00:14:42.174 "name": "BaseBdev4", 00:14:42.174 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:42.174 "is_configured": true, 00:14:42.174 "data_offset": 2048, 00:14:42.174 "data_size": 63488 00:14:42.174 } 00:14:42.174 ] 00:14:42.174 }' 00:14:42.174 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.174 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:14:42.174 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.174 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:42.174 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:42.174 12:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.174 12:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.174 [2024-09-30 12:31:53.945269] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:42.174 [2024-09-30 12:31:53.958465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:42.174 12:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.174 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:42.174 [2024-09-30 12:31:53.960237] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:43.113 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.113 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.113 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.113 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.113 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.113 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.113 12:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.113 12:31:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.113 12:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.113 12:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.373 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.373 "name": "raid_bdev1", 00:14:43.373 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:43.374 "strip_size_kb": 0, 00:14:43.374 "state": "online", 00:14:43.374 "raid_level": "raid1", 00:14:43.374 "superblock": true, 00:14:43.374 "num_base_bdevs": 4, 00:14:43.374 "num_base_bdevs_discovered": 4, 00:14:43.374 "num_base_bdevs_operational": 4, 00:14:43.374 "process": { 00:14:43.374 "type": "rebuild", 00:14:43.374 "target": "spare", 00:14:43.374 "progress": { 00:14:43.374 "blocks": 20480, 00:14:43.374 "percent": 32 00:14:43.374 } 00:14:43.374 }, 00:14:43.374 "base_bdevs_list": [ 00:14:43.374 { 00:14:43.374 "name": "spare", 00:14:43.374 "uuid": "49333ea7-11f7-5a2e-a0cf-60de2228d996", 00:14:43.374 "is_configured": true, 00:14:43.374 "data_offset": 2048, 00:14:43.374 "data_size": 63488 00:14:43.374 }, 00:14:43.374 { 00:14:43.374 "name": "BaseBdev2", 00:14:43.374 "uuid": "2a7c95e1-958e-58a0-9932-c247068d5c32", 00:14:43.374 "is_configured": true, 00:14:43.374 "data_offset": 2048, 00:14:43.374 "data_size": 63488 00:14:43.374 }, 00:14:43.374 { 00:14:43.374 "name": "BaseBdev3", 00:14:43.374 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:43.374 "is_configured": true, 00:14:43.374 "data_offset": 2048, 00:14:43.374 "data_size": 63488 00:14:43.374 }, 00:14:43.374 { 00:14:43.374 "name": "BaseBdev4", 00:14:43.374 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:43.374 "is_configured": true, 00:14:43.374 "data_offset": 2048, 00:14:43.374 "data_size": 63488 00:14:43.374 } 00:14:43.374 ] 00:14:43.374 }' 00:14:43.374 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:14:43.374 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.374 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.374 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.374 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:43.374 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:43.374 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:43.374 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:43.374 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:43.374 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:43.374 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:43.374 12:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.374 12:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.374 [2024-09-30 12:31:55.112123] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:43.374 [2024-09-30 12:31:55.264382] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process 
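The `[: =: unary operator expected` message in the trace above is the classic single-bracket failure mode: when the variable in `'[' $var = false ']'` expands to nothing, the test builtin sees `[ = false ]`, which is malformed. A minimal reproduction of the pattern (the variable name is illustrative, not taken from bdev_raid.sh):

```shell
#!/bin/sh
# With an empty, unquoted variable, '[ $flag = false ]' becomes '[ = false ]'
# and fails with "unary operator expected". Quoting keeps the test two-sided.
flag=""
if [ "$flag" = false ]; then
    echo "flag is false"
else
    echo "flag is empty or not false"
fi
# -> flag is empty or not false
```

In bash, `[[ $flag = false ]]` avoids the problem as well, since `[[ ]]` does not word-split the operands; the log shows the script continuing past the error because the malformed test simply evaluates as false.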
raid_bdev1 rebuild spare 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.634 "name": "raid_bdev1", 00:14:43.634 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:43.634 "strip_size_kb": 0, 00:14:43.634 "state": "online", 00:14:43.634 "raid_level": "raid1", 00:14:43.634 "superblock": true, 00:14:43.634 "num_base_bdevs": 4, 00:14:43.634 "num_base_bdevs_discovered": 3, 00:14:43.634 "num_base_bdevs_operational": 3, 00:14:43.634 "process": { 00:14:43.634 "type": "rebuild", 00:14:43.634 "target": "spare", 00:14:43.634 "progress": { 00:14:43.634 "blocks": 24576, 00:14:43.634 "percent": 38 00:14:43.634 } 00:14:43.634 }, 00:14:43.634 "base_bdevs_list": [ 00:14:43.634 { 00:14:43.634 "name": "spare", 00:14:43.634 "uuid": "49333ea7-11f7-5a2e-a0cf-60de2228d996", 00:14:43.634 "is_configured": true, 00:14:43.634 "data_offset": 2048, 00:14:43.634 "data_size": 63488 00:14:43.634 }, 00:14:43.634 { 00:14:43.634 "name": null, 00:14:43.634 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:43.634 "is_configured": false, 00:14:43.634 "data_offset": 0, 00:14:43.634 "data_size": 63488 00:14:43.634 }, 00:14:43.634 { 00:14:43.634 "name": "BaseBdev3", 00:14:43.634 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:43.634 "is_configured": true, 00:14:43.634 "data_offset": 2048, 00:14:43.634 "data_size": 63488 00:14:43.634 }, 00:14:43.634 { 00:14:43.634 "name": "BaseBdev4", 00:14:43.634 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:43.634 "is_configured": true, 00:14:43.634 "data_offset": 2048, 00:14:43.634 "data_size": 63488 00:14:43.634 } 00:14:43.634 ] 00:14:43.634 }' 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=460 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.634 "name": "raid_bdev1", 00:14:43.634 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:43.634 "strip_size_kb": 0, 00:14:43.634 "state": "online", 00:14:43.634 "raid_level": "raid1", 00:14:43.634 "superblock": true, 00:14:43.634 "num_base_bdevs": 4, 00:14:43.634 "num_base_bdevs_discovered": 3, 00:14:43.634 "num_base_bdevs_operational": 3, 00:14:43.634 "process": { 00:14:43.634 "type": "rebuild", 00:14:43.634 "target": "spare", 00:14:43.634 "progress": { 00:14:43.634 "blocks": 26624, 00:14:43.634 "percent": 41 00:14:43.634 } 00:14:43.634 }, 00:14:43.634 "base_bdevs_list": [ 00:14:43.634 { 00:14:43.634 "name": "spare", 00:14:43.634 "uuid": "49333ea7-11f7-5a2e-a0cf-60de2228d996", 00:14:43.634 "is_configured": true, 00:14:43.634 "data_offset": 2048, 00:14:43.634 "data_size": 63488 00:14:43.634 }, 00:14:43.634 { 00:14:43.634 "name": null, 00:14:43.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.634 "is_configured": false, 00:14:43.634 "data_offset": 0, 00:14:43.634 "data_size": 63488 00:14:43.634 }, 00:14:43.634 { 00:14:43.634 "name": "BaseBdev3", 00:14:43.634 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:43.634 "is_configured": true, 00:14:43.634 "data_offset": 2048, 00:14:43.634 "data_size": 63488 00:14:43.634 }, 00:14:43.634 { 00:14:43.634 "name": "BaseBdev4", 00:14:43.634 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:43.634 "is_configured": true, 00:14:43.634 "data_offset": 2048, 00:14:43.634 "data_size": 63488 
00:14:43.634 } 00:14:43.634 ] 00:14:43.634 }' 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.634 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:45.015 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.015 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.015 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.015 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.015 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.015 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.015 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.015 12:31:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.015 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.015 12:31:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.015 12:31:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.015 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.015 "name": "raid_bdev1", 00:14:45.015 "uuid": 
"16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:45.015 "strip_size_kb": 0, 00:14:45.015 "state": "online", 00:14:45.015 "raid_level": "raid1", 00:14:45.015 "superblock": true, 00:14:45.015 "num_base_bdevs": 4, 00:14:45.015 "num_base_bdevs_discovered": 3, 00:14:45.015 "num_base_bdevs_operational": 3, 00:14:45.015 "process": { 00:14:45.015 "type": "rebuild", 00:14:45.015 "target": "spare", 00:14:45.015 "progress": { 00:14:45.015 "blocks": 49152, 00:14:45.015 "percent": 77 00:14:45.015 } 00:14:45.015 }, 00:14:45.015 "base_bdevs_list": [ 00:14:45.015 { 00:14:45.015 "name": "spare", 00:14:45.015 "uuid": "49333ea7-11f7-5a2e-a0cf-60de2228d996", 00:14:45.015 "is_configured": true, 00:14:45.015 "data_offset": 2048, 00:14:45.015 "data_size": 63488 00:14:45.015 }, 00:14:45.015 { 00:14:45.015 "name": null, 00:14:45.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.015 "is_configured": false, 00:14:45.015 "data_offset": 0, 00:14:45.016 "data_size": 63488 00:14:45.016 }, 00:14:45.016 { 00:14:45.016 "name": "BaseBdev3", 00:14:45.016 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:45.016 "is_configured": true, 00:14:45.016 "data_offset": 2048, 00:14:45.016 "data_size": 63488 00:14:45.016 }, 00:14:45.016 { 00:14:45.016 "name": "BaseBdev4", 00:14:45.016 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:45.016 "is_configured": true, 00:14:45.016 "data_offset": 2048, 00:14:45.016 "data_size": 63488 00:14:45.016 } 00:14:45.016 ] 00:14:45.016 }' 00:14:45.016 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.016 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.016 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.016 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.016 12:31:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:14:45.277 [2024-09-30 12:31:57.171302] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:45.277 [2024-09-30 12:31:57.171391] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:45.277 [2024-09-30 12:31:57.171497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.846 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.846 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.846 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.846 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.846 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.846 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.846 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.846 12:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.846 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.846 12:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.846 12:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.106 "name": "raid_bdev1", 00:14:46.106 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:46.106 "strip_size_kb": 0, 00:14:46.106 "state": "online", 00:14:46.106 "raid_level": "raid1", 00:14:46.106 "superblock": true, 00:14:46.106 "num_base_bdevs": 
4, 00:14:46.106 "num_base_bdevs_discovered": 3, 00:14:46.106 "num_base_bdevs_operational": 3, 00:14:46.106 "base_bdevs_list": [ 00:14:46.106 { 00:14:46.106 "name": "spare", 00:14:46.106 "uuid": "49333ea7-11f7-5a2e-a0cf-60de2228d996", 00:14:46.106 "is_configured": true, 00:14:46.106 "data_offset": 2048, 00:14:46.106 "data_size": 63488 00:14:46.106 }, 00:14:46.106 { 00:14:46.106 "name": null, 00:14:46.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.106 "is_configured": false, 00:14:46.106 "data_offset": 0, 00:14:46.106 "data_size": 63488 00:14:46.106 }, 00:14:46.106 { 00:14:46.106 "name": "BaseBdev3", 00:14:46.106 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:46.106 "is_configured": true, 00:14:46.106 "data_offset": 2048, 00:14:46.106 "data_size": 63488 00:14:46.106 }, 00:14:46.106 { 00:14:46.106 "name": "BaseBdev4", 00:14:46.106 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:46.106 "is_configured": true, 00:14:46.106 "data_offset": 2048, 00:14:46.106 "data_size": 63488 00:14:46.106 } 00:14:46.106 ] 00:14:46.106 }' 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:46.106 12:31:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.106 "name": "raid_bdev1", 00:14:46.106 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:46.106 "strip_size_kb": 0, 00:14:46.106 "state": "online", 00:14:46.106 "raid_level": "raid1", 00:14:46.106 "superblock": true, 00:14:46.106 "num_base_bdevs": 4, 00:14:46.106 "num_base_bdevs_discovered": 3, 00:14:46.106 "num_base_bdevs_operational": 3, 00:14:46.106 "base_bdevs_list": [ 00:14:46.106 { 00:14:46.106 "name": "spare", 00:14:46.106 "uuid": "49333ea7-11f7-5a2e-a0cf-60de2228d996", 00:14:46.106 "is_configured": true, 00:14:46.106 "data_offset": 2048, 00:14:46.106 "data_size": 63488 00:14:46.106 }, 00:14:46.106 { 00:14:46.106 "name": null, 00:14:46.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.106 "is_configured": false, 00:14:46.106 "data_offset": 0, 00:14:46.106 "data_size": 63488 00:14:46.106 }, 00:14:46.106 { 00:14:46.106 "name": "BaseBdev3", 00:14:46.106 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:46.106 "is_configured": true, 00:14:46.106 "data_offset": 2048, 00:14:46.106 "data_size": 63488 00:14:46.106 }, 00:14:46.106 { 00:14:46.106 "name": "BaseBdev4", 00:14:46.106 "uuid": 
"b7c008f6-b863-5230-9904-b58feb77b062", 00:14:46.106 "is_configured": true, 00:14:46.106 "data_offset": 2048, 00:14:46.106 "data_size": 63488 00:14:46.106 } 00:14:46.106 ] 00:14:46.106 }' 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.106 12:31:57 
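The `verify_raid_bdev_process` checks repeated throughout this trace lean on jq's `//` alternative operator: once the rebuild finishes and the `process` object disappears from the RPC output, `.process.type` evaluates to `null` and `//` substitutes the literal `"none"`, which is what the `[[ none == \n\o\n\e ]]` comparisons above then match. A minimal sketch of that behavior (sample JSON, not a captured RPC response):

```shell
#!/bin/sh
# jq's '//' yields its right-hand value when the left side is null or false,
# mapping "no rebuild process present" to the string "none".
info='{"name": "raid_bdev1", "state": "online"}'
echo "$info" | jq -r '.process.type // "none"'
# -> none
```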
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.106 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.367 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.367 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.367 "name": "raid_bdev1", 00:14:46.367 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:46.367 "strip_size_kb": 0, 00:14:46.367 "state": "online", 00:14:46.367 "raid_level": "raid1", 00:14:46.367 "superblock": true, 00:14:46.367 "num_base_bdevs": 4, 00:14:46.367 "num_base_bdevs_discovered": 3, 00:14:46.367 "num_base_bdevs_operational": 3, 00:14:46.367 "base_bdevs_list": [ 00:14:46.367 { 00:14:46.367 "name": "spare", 00:14:46.367 "uuid": "49333ea7-11f7-5a2e-a0cf-60de2228d996", 00:14:46.367 "is_configured": true, 00:14:46.367 "data_offset": 2048, 00:14:46.367 "data_size": 63488 00:14:46.367 }, 00:14:46.367 { 00:14:46.367 "name": null, 00:14:46.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.367 "is_configured": false, 00:14:46.367 "data_offset": 0, 00:14:46.367 "data_size": 63488 00:14:46.367 }, 00:14:46.367 { 00:14:46.367 "name": "BaseBdev3", 00:14:46.367 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:46.367 "is_configured": true, 00:14:46.367 "data_offset": 2048, 00:14:46.367 "data_size": 63488 00:14:46.367 }, 00:14:46.367 { 00:14:46.367 "name": "BaseBdev4", 00:14:46.367 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:46.367 "is_configured": true, 00:14:46.367 "data_offset": 2048, 00:14:46.367 "data_size": 63488 00:14:46.367 } 00:14:46.367 ] 00:14:46.367 }' 00:14:46.367 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.367 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 
-- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.627 [2024-09-30 12:31:58.436050] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:46.627 [2024-09-30 12:31:58.436081] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:46.627 [2024-09-30 12:31:58.436150] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.627 [2024-09-30 12:31:58.436213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:46.627 [2024-09-30 12:31:58.436229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:46.627 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:46.887 /dev/nbd0 00:14:46.887 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:46.887 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:46.887 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:46.887 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:46.887 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:46.887 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:46.887 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:46.887 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:46.887 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:46.887 
12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:46.887 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:46.887 1+0 records in 00:14:46.887 1+0 records out 00:14:46.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258594 s, 15.8 MB/s 00:14:46.887 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.887 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:46.887 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.887 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:46.887 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:46.887 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:46.887 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:46.887 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:47.147 /dev/nbd1 00:14:47.147 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:47.147 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:47.147 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:47.147 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:47.147 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:47.147 12:31:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:47.147 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:47.147 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:47.147 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:47.147 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:47.147 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:47.147 1+0 records in 00:14:47.147 1+0 records out 00:14:47.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381197 s, 10.7 MB/s 00:14:47.147 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.147 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:47.147 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.147 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:47.147 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:47.147 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.147 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:47.147 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:47.407 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:47.407 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:47.407 12:31:59 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:47.407 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:47.407 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:47.407 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:47.407 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:47.407 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:47.407 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:47.407 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:47.407 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:47.407 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:47.407 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:47.667 12:31:59 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.667 [2024-09-30 12:31:59.513753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:47.667 [2024-09-30 12:31:59.513823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.667 [2024-09-30 12:31:59.513846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:47.667 [2024-09-30 12:31:59.513855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.667 [2024-09-30 12:31:59.515914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.667 [2024-09-30 12:31:59.515955] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:47.667 [2024-09-30 12:31:59.516057] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:47.667 [2024-09-30 12:31:59.516106] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.667 [2024-09-30 12:31:59.516235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:47.667 [2024-09-30 12:31:59.516343] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:47.667 spare 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.667 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.927 [2024-09-30 12:31:59.616230] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:47.927 [2024-09-30 12:31:59.616257] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:47.927 [2024-09-30 12:31:59.616543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:47.927 [2024-09-30 12:31:59.616720] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:47.927 [2024-09-30 12:31:59.616755] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:47.927 [2024-09-30 12:31:59.616913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.927 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.927 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:14:47.927 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.927 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.927 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.927 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.927 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.927 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.927 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.927 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.927 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.927 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.927 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.927 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.927 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.927 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.927 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.927 "name": "raid_bdev1", 00:14:47.927 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:47.927 "strip_size_kb": 0, 00:14:47.927 "state": "online", 00:14:47.927 "raid_level": "raid1", 00:14:47.927 "superblock": true, 00:14:47.927 "num_base_bdevs": 4, 00:14:47.927 "num_base_bdevs_discovered": 3, 00:14:47.927 "num_base_bdevs_operational": 
3, 00:14:47.927 "base_bdevs_list": [ 00:14:47.927 { 00:14:47.927 "name": "spare", 00:14:47.927 "uuid": "49333ea7-11f7-5a2e-a0cf-60de2228d996", 00:14:47.927 "is_configured": true, 00:14:47.927 "data_offset": 2048, 00:14:47.927 "data_size": 63488 00:14:47.927 }, 00:14:47.927 { 00:14:47.927 "name": null, 00:14:47.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.927 "is_configured": false, 00:14:47.927 "data_offset": 2048, 00:14:47.927 "data_size": 63488 00:14:47.927 }, 00:14:47.927 { 00:14:47.927 "name": "BaseBdev3", 00:14:47.927 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:47.927 "is_configured": true, 00:14:47.927 "data_offset": 2048, 00:14:47.927 "data_size": 63488 00:14:47.927 }, 00:14:47.927 { 00:14:47.927 "name": "BaseBdev4", 00:14:47.927 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:47.927 "is_configured": true, 00:14:47.927 "data_offset": 2048, 00:14:47.927 "data_size": 63488 00:14:47.927 } 00:14:47.927 ] 00:14:47.927 }' 00:14:47.927 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.927 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.187 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:48.187 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.187 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:48.187 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:48.187 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.187 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.187 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.187 12:32:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:48.187 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.447 "name": "raid_bdev1", 00:14:48.447 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:48.447 "strip_size_kb": 0, 00:14:48.447 "state": "online", 00:14:48.447 "raid_level": "raid1", 00:14:48.447 "superblock": true, 00:14:48.447 "num_base_bdevs": 4, 00:14:48.447 "num_base_bdevs_discovered": 3, 00:14:48.447 "num_base_bdevs_operational": 3, 00:14:48.447 "base_bdevs_list": [ 00:14:48.447 { 00:14:48.447 "name": "spare", 00:14:48.447 "uuid": "49333ea7-11f7-5a2e-a0cf-60de2228d996", 00:14:48.447 "is_configured": true, 00:14:48.447 "data_offset": 2048, 00:14:48.447 "data_size": 63488 00:14:48.447 }, 00:14:48.447 { 00:14:48.447 "name": null, 00:14:48.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.447 "is_configured": false, 00:14:48.447 "data_offset": 2048, 00:14:48.447 "data_size": 63488 00:14:48.447 }, 00:14:48.447 { 00:14:48.447 "name": "BaseBdev3", 00:14:48.447 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:48.447 "is_configured": true, 00:14:48.447 "data_offset": 2048, 00:14:48.447 "data_size": 63488 00:14:48.447 }, 00:14:48.447 { 00:14:48.447 "name": "BaseBdev4", 00:14:48.447 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:48.447 "is_configured": true, 00:14:48.447 "data_offset": 2048, 00:14:48.447 "data_size": 63488 00:14:48.447 } 00:14:48.447 ] 00:14:48.447 }' 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.447 [2024-09-30 12:32:00.228603] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.447 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.447 "name": "raid_bdev1", 00:14:48.447 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:48.447 "strip_size_kb": 0, 00:14:48.447 "state": "online", 00:14:48.447 "raid_level": "raid1", 00:14:48.447 "superblock": true, 00:14:48.447 "num_base_bdevs": 4, 00:14:48.447 "num_base_bdevs_discovered": 2, 00:14:48.447 "num_base_bdevs_operational": 2, 00:14:48.447 "base_bdevs_list": [ 00:14:48.447 { 00:14:48.447 "name": null, 00:14:48.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.447 "is_configured": false, 00:14:48.447 "data_offset": 0, 00:14:48.447 "data_size": 63488 00:14:48.447 }, 00:14:48.447 { 00:14:48.447 "name": null, 00:14:48.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.448 "is_configured": false, 00:14:48.448 "data_offset": 2048, 00:14:48.448 "data_size": 63488 00:14:48.448 }, 00:14:48.448 { 00:14:48.448 "name": "BaseBdev3", 00:14:48.448 "uuid": 
"d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:48.448 "is_configured": true, 00:14:48.448 "data_offset": 2048, 00:14:48.448 "data_size": 63488 00:14:48.448 }, 00:14:48.448 { 00:14:48.448 "name": "BaseBdev4", 00:14:48.448 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:48.448 "is_configured": true, 00:14:48.448 "data_offset": 2048, 00:14:48.448 "data_size": 63488 00:14:48.448 } 00:14:48.448 ] 00:14:48.448 }' 00:14:48.448 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.448 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.017 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:49.017 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.017 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.017 [2024-09-30 12:32:00.639876] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.017 [2024-09-30 12:32:00.640013] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:49.017 [2024-09-30 12:32:00.640035] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:49.017 [2024-09-30 12:32:00.640070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.017 [2024-09-30 12:32:00.653325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:49.017 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.017 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:49.017 [2024-09-30 12:32:00.655038] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:49.956 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.956 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.956 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.956 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.956 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.956 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.956 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.956 12:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.956 12:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.956 12:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.956 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.956 "name": "raid_bdev1", 00:14:49.956 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:49.956 "strip_size_kb": 0, 00:14:49.956 "state": "online", 00:14:49.956 "raid_level": "raid1", 
00:14:49.956 "superblock": true, 00:14:49.956 "num_base_bdevs": 4, 00:14:49.956 "num_base_bdevs_discovered": 3, 00:14:49.956 "num_base_bdevs_operational": 3, 00:14:49.956 "process": { 00:14:49.956 "type": "rebuild", 00:14:49.956 "target": "spare", 00:14:49.956 "progress": { 00:14:49.956 "blocks": 20480, 00:14:49.956 "percent": 32 00:14:49.956 } 00:14:49.956 }, 00:14:49.956 "base_bdevs_list": [ 00:14:49.956 { 00:14:49.956 "name": "spare", 00:14:49.956 "uuid": "49333ea7-11f7-5a2e-a0cf-60de2228d996", 00:14:49.956 "is_configured": true, 00:14:49.956 "data_offset": 2048, 00:14:49.956 "data_size": 63488 00:14:49.956 }, 00:14:49.956 { 00:14:49.956 "name": null, 00:14:49.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.956 "is_configured": false, 00:14:49.956 "data_offset": 2048, 00:14:49.956 "data_size": 63488 00:14:49.956 }, 00:14:49.956 { 00:14:49.956 "name": "BaseBdev3", 00:14:49.956 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:49.956 "is_configured": true, 00:14:49.956 "data_offset": 2048, 00:14:49.956 "data_size": 63488 00:14:49.956 }, 00:14:49.956 { 00:14:49.956 "name": "BaseBdev4", 00:14:49.956 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:49.956 "is_configured": true, 00:14:49.956 "data_offset": 2048, 00:14:49.956 "data_size": 63488 00:14:49.956 } 00:14:49.956 ] 00:14:49.956 }' 00:14:49.956 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.956 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.956 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.956 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.956 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:49.956 12:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:49.956 12:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.956 [2024-09-30 12:32:01.819225] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:50.216 [2024-09-30 12:32:01.859623] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:50.216 [2024-09-30 12:32:01.859679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.216 [2024-09-30 12:32:01.859696] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:50.216 [2024-09-30 12:32:01.859703] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:50.216 12:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.216 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:50.216 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.216 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.216 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.216 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.216 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:50.216 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.216 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.216 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.216 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.216 12:32:01 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.216 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.216 12:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.216 12:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.216 12:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.216 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.216 "name": "raid_bdev1", 00:14:50.216 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:50.216 "strip_size_kb": 0, 00:14:50.216 "state": "online", 00:14:50.216 "raid_level": "raid1", 00:14:50.216 "superblock": true, 00:14:50.216 "num_base_bdevs": 4, 00:14:50.216 "num_base_bdevs_discovered": 2, 00:14:50.216 "num_base_bdevs_operational": 2, 00:14:50.216 "base_bdevs_list": [ 00:14:50.216 { 00:14:50.216 "name": null, 00:14:50.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.216 "is_configured": false, 00:14:50.216 "data_offset": 0, 00:14:50.216 "data_size": 63488 00:14:50.216 }, 00:14:50.216 { 00:14:50.216 "name": null, 00:14:50.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.216 "is_configured": false, 00:14:50.216 "data_offset": 2048, 00:14:50.216 "data_size": 63488 00:14:50.216 }, 00:14:50.216 { 00:14:50.216 "name": "BaseBdev3", 00:14:50.216 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:50.216 "is_configured": true, 00:14:50.216 "data_offset": 2048, 00:14:50.216 "data_size": 63488 00:14:50.216 }, 00:14:50.216 { 00:14:50.216 "name": "BaseBdev4", 00:14:50.216 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:50.216 "is_configured": true, 00:14:50.216 "data_offset": 2048, 00:14:50.216 "data_size": 63488 00:14:50.216 } 00:14:50.216 ] 00:14:50.216 }' 00:14:50.216 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:50.216 12:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.476 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:50.476 12:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.476 12:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.476 [2024-09-30 12:32:02.314263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:50.477 [2024-09-30 12:32:02.314334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.477 [2024-09-30 12:32:02.314359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:50.477 [2024-09-30 12:32:02.314369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.477 [2024-09-30 12:32:02.314810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.477 [2024-09-30 12:32:02.314837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:50.477 [2024-09-30 12:32:02.314912] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:50.477 [2024-09-30 12:32:02.314924] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:50.477 [2024-09-30 12:32:02.314936] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:50.477 [2024-09-30 12:32:02.314959] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:50.477 [2024-09-30 12:32:02.328051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:50.477 spare 00:14:50.477 12:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.477 [2024-09-30 12:32:02.329793] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:50.477 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.860 "name": "raid_bdev1", 00:14:51.860 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:51.860 "strip_size_kb": 0, 00:14:51.860 "state": "online", 00:14:51.860 
"raid_level": "raid1", 00:14:51.860 "superblock": true, 00:14:51.860 "num_base_bdevs": 4, 00:14:51.860 "num_base_bdevs_discovered": 3, 00:14:51.860 "num_base_bdevs_operational": 3, 00:14:51.860 "process": { 00:14:51.860 "type": "rebuild", 00:14:51.860 "target": "spare", 00:14:51.860 "progress": { 00:14:51.860 "blocks": 20480, 00:14:51.860 "percent": 32 00:14:51.860 } 00:14:51.860 }, 00:14:51.860 "base_bdevs_list": [ 00:14:51.860 { 00:14:51.860 "name": "spare", 00:14:51.860 "uuid": "49333ea7-11f7-5a2e-a0cf-60de2228d996", 00:14:51.860 "is_configured": true, 00:14:51.860 "data_offset": 2048, 00:14:51.860 "data_size": 63488 00:14:51.860 }, 00:14:51.860 { 00:14:51.860 "name": null, 00:14:51.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.860 "is_configured": false, 00:14:51.860 "data_offset": 2048, 00:14:51.860 "data_size": 63488 00:14:51.860 }, 00:14:51.860 { 00:14:51.860 "name": "BaseBdev3", 00:14:51.860 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:51.860 "is_configured": true, 00:14:51.860 "data_offset": 2048, 00:14:51.860 "data_size": 63488 00:14:51.860 }, 00:14:51.860 { 00:14:51.860 "name": "BaseBdev4", 00:14:51.860 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:51.860 "is_configured": true, 00:14:51.860 "data_offset": 2048, 00:14:51.860 "data_size": 63488 00:14:51.860 } 00:14:51.860 ] 00:14:51.860 }' 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.860 [2024-09-30 12:32:03.481798] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.860 [2024-09-30 12:32:03.534153] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:51.860 [2024-09-30 12:32:03.534208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.860 [2024-09-30 12:32:03.534223] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.860 [2024-09-30 12:32:03.534231] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.860 
12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.860 "name": "raid_bdev1", 00:14:51.860 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:51.860 "strip_size_kb": 0, 00:14:51.860 "state": "online", 00:14:51.860 "raid_level": "raid1", 00:14:51.860 "superblock": true, 00:14:51.860 "num_base_bdevs": 4, 00:14:51.860 "num_base_bdevs_discovered": 2, 00:14:51.860 "num_base_bdevs_operational": 2, 00:14:51.860 "base_bdevs_list": [ 00:14:51.860 { 00:14:51.860 "name": null, 00:14:51.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.860 "is_configured": false, 00:14:51.860 "data_offset": 0, 00:14:51.860 "data_size": 63488 00:14:51.860 }, 00:14:51.860 { 00:14:51.860 "name": null, 00:14:51.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.860 "is_configured": false, 00:14:51.860 "data_offset": 2048, 00:14:51.860 "data_size": 63488 00:14:51.860 }, 00:14:51.860 { 00:14:51.860 "name": "BaseBdev3", 00:14:51.860 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:51.860 "is_configured": true, 00:14:51.860 "data_offset": 2048, 00:14:51.860 "data_size": 63488 00:14:51.860 }, 00:14:51.860 { 00:14:51.860 "name": "BaseBdev4", 00:14:51.860 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:51.860 "is_configured": true, 00:14:51.860 "data_offset": 2048, 00:14:51.860 "data_size": 63488 00:14:51.860 } 00:14:51.860 ] 00:14:51.860 }' 00:14:51.860 12:32:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.860 12:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.121 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.121 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.121 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.121 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.121 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.121 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.121 12:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.121 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.121 12:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.121 12:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.121 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.121 "name": "raid_bdev1", 00:14:52.121 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:52.121 "strip_size_kb": 0, 00:14:52.121 "state": "online", 00:14:52.121 "raid_level": "raid1", 00:14:52.121 "superblock": true, 00:14:52.121 "num_base_bdevs": 4, 00:14:52.121 "num_base_bdevs_discovered": 2, 00:14:52.121 "num_base_bdevs_operational": 2, 00:14:52.121 "base_bdevs_list": [ 00:14:52.121 { 00:14:52.121 "name": null, 00:14:52.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.121 "is_configured": false, 00:14:52.121 "data_offset": 0, 00:14:52.121 "data_size": 63488 00:14:52.121 }, 00:14:52.121 
{ 00:14:52.121 "name": null, 00:14:52.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.121 "is_configured": false, 00:14:52.121 "data_offset": 2048, 00:14:52.121 "data_size": 63488 00:14:52.121 }, 00:14:52.121 { 00:14:52.121 "name": "BaseBdev3", 00:14:52.121 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:52.121 "is_configured": true, 00:14:52.121 "data_offset": 2048, 00:14:52.121 "data_size": 63488 00:14:52.121 }, 00:14:52.121 { 00:14:52.121 "name": "BaseBdev4", 00:14:52.121 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:52.121 "is_configured": true, 00:14:52.121 "data_offset": 2048, 00:14:52.121 "data_size": 63488 00:14:52.121 } 00:14:52.121 ] 00:14:52.121 }' 00:14:52.121 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.381 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.381 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.381 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.381 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:52.381 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.381 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.381 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.381 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:52.381 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.381 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.381 [2024-09-30 12:32:04.091554] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:52.381 [2024-09-30 12:32:04.091623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.381 [2024-09-30 12:32:04.091641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:52.381 [2024-09-30 12:32:04.091652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.381 [2024-09-30 12:32:04.092086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.381 [2024-09-30 12:32:04.092118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:52.381 [2024-09-30 12:32:04.092186] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:52.381 [2024-09-30 12:32:04.092202] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:52.381 [2024-09-30 12:32:04.092210] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:52.381 [2024-09-30 12:32:04.092224] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:52.381 BaseBdev1 00:14:52.381 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.381 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:53.320 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:53.320 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.320 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.320 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.320 12:32:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.320 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:53.320 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.320 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.320 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.320 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.320 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.320 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.320 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.320 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.320 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.320 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.320 "name": "raid_bdev1", 00:14:53.320 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:53.320 "strip_size_kb": 0, 00:14:53.320 "state": "online", 00:14:53.320 "raid_level": "raid1", 00:14:53.320 "superblock": true, 00:14:53.320 "num_base_bdevs": 4, 00:14:53.320 "num_base_bdevs_discovered": 2, 00:14:53.320 "num_base_bdevs_operational": 2, 00:14:53.320 "base_bdevs_list": [ 00:14:53.320 { 00:14:53.320 "name": null, 00:14:53.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.320 "is_configured": false, 00:14:53.320 "data_offset": 0, 00:14:53.320 "data_size": 63488 00:14:53.320 }, 00:14:53.320 { 00:14:53.320 "name": null, 00:14:53.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.320 
"is_configured": false, 00:14:53.320 "data_offset": 2048, 00:14:53.320 "data_size": 63488 00:14:53.320 }, 00:14:53.320 { 00:14:53.320 "name": "BaseBdev3", 00:14:53.320 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:53.320 "is_configured": true, 00:14:53.320 "data_offset": 2048, 00:14:53.320 "data_size": 63488 00:14:53.320 }, 00:14:53.320 { 00:14:53.320 "name": "BaseBdev4", 00:14:53.320 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:53.320 "is_configured": true, 00:14:53.320 "data_offset": 2048, 00:14:53.320 "data_size": 63488 00:14:53.320 } 00:14:53.320 ] 00:14:53.320 }' 00:14:53.320 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.320 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:53.890 "name": "raid_bdev1", 00:14:53.890 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:53.890 "strip_size_kb": 0, 00:14:53.890 "state": "online", 00:14:53.890 "raid_level": "raid1", 00:14:53.890 "superblock": true, 00:14:53.890 "num_base_bdevs": 4, 00:14:53.890 "num_base_bdevs_discovered": 2, 00:14:53.890 "num_base_bdevs_operational": 2, 00:14:53.890 "base_bdevs_list": [ 00:14:53.890 { 00:14:53.890 "name": null, 00:14:53.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.890 "is_configured": false, 00:14:53.890 "data_offset": 0, 00:14:53.890 "data_size": 63488 00:14:53.890 }, 00:14:53.890 { 00:14:53.890 "name": null, 00:14:53.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.890 "is_configured": false, 00:14:53.890 "data_offset": 2048, 00:14:53.890 "data_size": 63488 00:14:53.890 }, 00:14:53.890 { 00:14:53.890 "name": "BaseBdev3", 00:14:53.890 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:53.890 "is_configured": true, 00:14:53.890 "data_offset": 2048, 00:14:53.890 "data_size": 63488 00:14:53.890 }, 00:14:53.890 { 00:14:53.890 "name": "BaseBdev4", 00:14:53.890 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:53.890 "is_configured": true, 00:14:53.890 "data_offset": 2048, 00:14:53.890 "data_size": 63488 00:14:53.890 } 00:14:53.890 ] 00:14:53.890 }' 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.890 [2024-09-30 12:32:05.661325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:53.890 [2024-09-30 12:32:05.661499] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:53.890 [2024-09-30 12:32:05.661513] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:53.890 request: 00:14:53.890 { 00:14:53.890 "base_bdev": "BaseBdev1", 00:14:53.890 "raid_bdev": "raid_bdev1", 00:14:53.890 "method": "bdev_raid_add_base_bdev", 00:14:53.890 "req_id": 1 00:14:53.890 } 00:14:53.890 Got JSON-RPC error response 00:14:53.890 response: 00:14:53.890 { 00:14:53.890 "code": -22, 00:14:53.890 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:53.890 } 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:53.890 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:54.828 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:54.828 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.828 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.828 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.828 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.828 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:54.828 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.828 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.828 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.828 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.828 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.828 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.828 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.828 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:54.828 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.088 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.088 "name": "raid_bdev1", 00:14:55.088 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:55.088 "strip_size_kb": 0, 00:14:55.088 "state": "online", 00:14:55.088 "raid_level": "raid1", 00:14:55.088 "superblock": true, 00:14:55.088 "num_base_bdevs": 4, 00:14:55.088 "num_base_bdevs_discovered": 2, 00:14:55.088 "num_base_bdevs_operational": 2, 00:14:55.088 "base_bdevs_list": [ 00:14:55.088 { 00:14:55.088 "name": null, 00:14:55.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.088 "is_configured": false, 00:14:55.088 "data_offset": 0, 00:14:55.088 "data_size": 63488 00:14:55.088 }, 00:14:55.088 { 00:14:55.088 "name": null, 00:14:55.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.088 "is_configured": false, 00:14:55.088 "data_offset": 2048, 00:14:55.088 "data_size": 63488 00:14:55.088 }, 00:14:55.088 { 00:14:55.088 "name": "BaseBdev3", 00:14:55.088 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:55.088 "is_configured": true, 00:14:55.088 "data_offset": 2048, 00:14:55.088 "data_size": 63488 00:14:55.088 }, 00:14:55.088 { 00:14:55.088 "name": "BaseBdev4", 00:14:55.088 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:55.088 "is_configured": true, 00:14:55.088 "data_offset": 2048, 00:14:55.088 "data_size": 63488 00:14:55.088 } 00:14:55.088 ] 00:14:55.088 }' 00:14:55.088 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.088 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.347 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:55.347 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.347 12:32:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:55.347 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:55.347 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.347 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.347 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.347 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.347 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.347 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.347 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.347 "name": "raid_bdev1", 00:14:55.347 "uuid": "16c0f83d-cd22-4c1a-b9cf-5c23c0bc00ec", 00:14:55.347 "strip_size_kb": 0, 00:14:55.347 "state": "online", 00:14:55.347 "raid_level": "raid1", 00:14:55.347 "superblock": true, 00:14:55.347 "num_base_bdevs": 4, 00:14:55.347 "num_base_bdevs_discovered": 2, 00:14:55.347 "num_base_bdevs_operational": 2, 00:14:55.347 "base_bdevs_list": [ 00:14:55.347 { 00:14:55.347 "name": null, 00:14:55.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.347 "is_configured": false, 00:14:55.347 "data_offset": 0, 00:14:55.347 "data_size": 63488 00:14:55.347 }, 00:14:55.347 { 00:14:55.347 "name": null, 00:14:55.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.347 "is_configured": false, 00:14:55.347 "data_offset": 2048, 00:14:55.347 "data_size": 63488 00:14:55.347 }, 00:14:55.347 { 00:14:55.347 "name": "BaseBdev3", 00:14:55.347 "uuid": "d0b9bc3c-e4de-5a3a-a527-29ff69b805c6", 00:14:55.347 "is_configured": true, 00:14:55.347 "data_offset": 2048, 00:14:55.347 "data_size": 63488 00:14:55.347 }, 
00:14:55.347 { 00:14:55.347 "name": "BaseBdev4", 00:14:55.347 "uuid": "b7c008f6-b863-5230-9904-b58feb77b062", 00:14:55.347 "is_configured": true, 00:14:55.347 "data_offset": 2048, 00:14:55.347 "data_size": 63488 00:14:55.347 } 00:14:55.347 ] 00:14:55.347 }' 00:14:55.347 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.347 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:55.347 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.347 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:55.347 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77844 00:14:55.607 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77844 ']' 00:14:55.607 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 77844 00:14:55.607 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:55.607 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:55.607 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77844 00:14:55.607 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:55.607 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:55.607 killing process with pid 77844 00:14:55.607 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77844' 00:14:55.607 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 77844 00:14:55.607 Received shutdown signal, test time was about 60.000000 seconds 00:14:55.607 00:14:55.607 Latency(us) 00:14:55.607 Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.607 =================================================================================================================== 00:14:55.607 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:55.607 [2024-09-30 12:32:07.284922] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:55.607 [2024-09-30 12:32:07.285020] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.607 [2024-09-30 12:32:07.285087] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.607 [2024-09-30 12:32:07.285096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:55.607 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 77844 00:14:55.867 [2024-09-30 12:32:07.739690] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:57.247 12:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:57.247 00:14:57.247 real 0m24.676s 00:14:57.247 user 0m29.107s 00:14:57.247 sys 0m3.753s 00:14:57.247 12:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:57.247 12:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.247 ************************************ 00:14:57.247 END TEST raid_rebuild_test_sb 00:14:57.247 ************************************ 00:14:57.247 12:32:08 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:57.247 12:32:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:57.247 12:32:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:57.247 12:32:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:57.247 ************************************ 00:14:57.247 START TEST raid_rebuild_test_io 
00:14:57.247 ************************************ 00:14:57.247 12:32:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:14:57.248 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:57.248 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:57.248 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:57.248 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:57.248 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:57.248 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:57.248 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.248 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:57.248 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( 
i++ )) 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78592 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78592 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 78592 ']' 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:57.248 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:57.248 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.248 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:57.248 Zero copy mechanism will not be used. 00:14:57.248 [2024-09-30 12:32:09.103631] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:14:57.248 [2024-09-30 12:32:09.103772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78592 ] 00:14:57.507 [2024-09-30 12:32:09.271814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.766 [2024-09-30 12:32:09.461874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.766 [2024-09-30 12:32:09.648550] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.766 [2024-09-30 12:32:09.648584] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.025 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:58.025 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:14:58.025 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:58.025 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:58.025 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:58.025 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.285 BaseBdev1_malloc 00:14:58.285 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.285 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:58.285 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.285 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.285 [2024-09-30 12:32:09.946222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:58.285 [2024-09-30 12:32:09.946295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.285 [2024-09-30 12:32:09.946317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:58.285 [2024-09-30 12:32:09.946331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.285 [2024-09-30 12:32:09.948307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.285 [2024-09-30 12:32:09.948347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:58.285 BaseBdev1 00:14:58.285 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.285 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:58.285 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:58.285 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.285 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.285 BaseBdev2_malloc 00:14:58.285 12:32:09 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.285 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:58.285 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.285 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.285 [2024-09-30 12:32:10.005412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:58.285 [2024-09-30 12:32:10.005470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.285 [2024-09-30 12:32:10.005487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:58.285 [2024-09-30 12:32:10.005498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.285 [2024-09-30 12:32:10.007357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.285 [2024-09-30 12:32:10.007392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:58.285 BaseBdev2 00:14:58.285 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.285 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:58.285 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:58.285 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.285 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.285 BaseBdev3_malloc 00:14:58.285 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.285 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:14:58.285 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.286 [2024-09-30 12:32:10.058786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:58.286 [2024-09-30 12:32:10.058850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.286 [2024-09-30 12:32:10.058869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:58.286 [2024-09-30 12:32:10.058879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.286 [2024-09-30 12:32:10.060783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.286 [2024-09-30 12:32:10.060821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:58.286 BaseBdev3 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.286 BaseBdev4_malloc 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:14:58.286 [2024-09-30 12:32:10.106786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:58.286 [2024-09-30 12:32:10.106853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.286 [2024-09-30 12:32:10.106870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:58.286 [2024-09-30 12:32:10.106880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.286 [2024-09-30 12:32:10.108834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.286 [2024-09-30 12:32:10.108877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:58.286 BaseBdev4 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.286 spare_malloc 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.286 spare_delay 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:58.286 12:32:10 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.286 [2024-09-30 12:32:10.166677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:58.286 [2024-09-30 12:32:10.166731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.286 [2024-09-30 12:32:10.166758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:58.286 [2024-09-30 12:32:10.166769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.286 [2024-09-30 12:32:10.168692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.286 [2024-09-30 12:32:10.168729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:58.286 spare 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.286 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.286 [2024-09-30 12:32:10.178714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:58.546 [2024-09-30 12:32:10.180407] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:58.546 [2024-09-30 12:32:10.180491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:58.546 [2024-09-30 12:32:10.180543] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:58.546 [2024-09-30 12:32:10.180615] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:58.546 [2024-09-30 12:32:10.180625] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:58.546 [2024-09-30 12:32:10.180875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:58.546 [2024-09-30 12:32:10.181044] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:58.546 [2024-09-30 12:32:10.181062] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:58.546 [2024-09-30 12:32:10.181206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.546 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.546 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:58.546 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.546 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.546 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.546 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.546 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.546 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.546 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.546 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.546 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.546 12:32:10 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.546 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.546 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.546 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.546 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.546 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.546 "name": "raid_bdev1", 00:14:58.546 "uuid": "ae4d1c05-4299-4718-af30-f15119f37fa0", 00:14:58.546 "strip_size_kb": 0, 00:14:58.546 "state": "online", 00:14:58.546 "raid_level": "raid1", 00:14:58.546 "superblock": false, 00:14:58.546 "num_base_bdevs": 4, 00:14:58.546 "num_base_bdevs_discovered": 4, 00:14:58.546 "num_base_bdevs_operational": 4, 00:14:58.546 "base_bdevs_list": [ 00:14:58.546 { 00:14:58.546 "name": "BaseBdev1", 00:14:58.547 "uuid": "f00e79ad-a5a7-52f6-a4e7-461aeb4770af", 00:14:58.547 "is_configured": true, 00:14:58.547 "data_offset": 0, 00:14:58.547 "data_size": 65536 00:14:58.547 }, 00:14:58.547 { 00:14:58.547 "name": "BaseBdev2", 00:14:58.547 "uuid": "5d7966f1-49bf-561b-95d3-1d3e903c72db", 00:14:58.547 "is_configured": true, 00:14:58.547 "data_offset": 0, 00:14:58.547 "data_size": 65536 00:14:58.547 }, 00:14:58.547 { 00:14:58.547 "name": "BaseBdev3", 00:14:58.547 "uuid": "0e0d410c-6131-58c0-b502-0d8ef419b17f", 00:14:58.547 "is_configured": true, 00:14:58.547 "data_offset": 0, 00:14:58.547 "data_size": 65536 00:14:58.547 }, 00:14:58.547 { 00:14:58.547 "name": "BaseBdev4", 00:14:58.547 "uuid": "8c16be7e-67a4-5cfc-8bd9-475c71cf07aa", 00:14:58.547 "is_configured": true, 00:14:58.547 "data_offset": 0, 00:14:58.547 "data_size": 65536 00:14:58.547 } 00:14:58.547 ] 00:14:58.547 }' 00:14:58.547 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:58.547 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.806 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:58.806 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:58.806 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.806 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.806 [2024-09-30 12:32:10.678121] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.067 [2024-09-30 12:32:10.777642] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.067 12:32:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.067 "name": "raid_bdev1", 00:14:59.067 "uuid": "ae4d1c05-4299-4718-af30-f15119f37fa0", 00:14:59.067 "strip_size_kb": 0, 00:14:59.067 "state": "online", 00:14:59.067 "raid_level": "raid1", 00:14:59.067 "superblock": false, 00:14:59.067 "num_base_bdevs": 4, 00:14:59.067 "num_base_bdevs_discovered": 3, 00:14:59.067 "num_base_bdevs_operational": 3, 00:14:59.067 "base_bdevs_list": [ 00:14:59.067 { 00:14:59.067 "name": null, 00:14:59.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.067 "is_configured": false, 00:14:59.067 "data_offset": 0, 00:14:59.067 "data_size": 65536 00:14:59.067 }, 00:14:59.067 { 00:14:59.067 "name": "BaseBdev2", 00:14:59.067 "uuid": "5d7966f1-49bf-561b-95d3-1d3e903c72db", 00:14:59.067 "is_configured": true, 00:14:59.067 "data_offset": 0, 00:14:59.067 "data_size": 65536 00:14:59.067 }, 00:14:59.067 { 00:14:59.067 "name": "BaseBdev3", 00:14:59.067 "uuid": "0e0d410c-6131-58c0-b502-0d8ef419b17f", 00:14:59.067 "is_configured": true, 00:14:59.067 "data_offset": 0, 00:14:59.067 "data_size": 65536 00:14:59.067 }, 00:14:59.067 { 00:14:59.067 "name": "BaseBdev4", 00:14:59.067 "uuid": "8c16be7e-67a4-5cfc-8bd9-475c71cf07aa", 00:14:59.067 "is_configured": true, 00:14:59.067 "data_offset": 0, 00:14:59.067 "data_size": 65536 00:14:59.067 } 00:14:59.067 ] 00:14:59.067 }' 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.067 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.067 [2024-09-30 12:32:10.873572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:59.067 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:59.067 Zero copy mechanism will not be used. 00:14:59.067 Running I/O for 60 seconds... 
00:14:59.327 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:59.327 12:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.327 12:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.327 [2024-09-30 12:32:11.200820] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.587 12:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.587 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:59.587 [2024-09-30 12:32:11.266801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:59.587 [2024-09-30 12:32:11.268716] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:59.587 [2024-09-30 12:32:11.377817] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:59.587 [2024-09-30 12:32:11.378974] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:59.848 [2024-09-30 12:32:11.600766] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:59.848 [2024-09-30 12:32:11.601401] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:00.367 192.00 IOPS, 576.00 MiB/s [2024-09-30 12:32:12.085147] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:00.367 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.367 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.367 
12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.367 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.367 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.367 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.367 12:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.367 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.367 12:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.627 12:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.627 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.627 "name": "raid_bdev1", 00:15:00.627 "uuid": "ae4d1c05-4299-4718-af30-f15119f37fa0", 00:15:00.627 "strip_size_kb": 0, 00:15:00.627 "state": "online", 00:15:00.627 "raid_level": "raid1", 00:15:00.627 "superblock": false, 00:15:00.627 "num_base_bdevs": 4, 00:15:00.627 "num_base_bdevs_discovered": 4, 00:15:00.627 "num_base_bdevs_operational": 4, 00:15:00.628 "process": { 00:15:00.628 "type": "rebuild", 00:15:00.628 "target": "spare", 00:15:00.628 "progress": { 00:15:00.628 "blocks": 12288, 00:15:00.628 "percent": 18 00:15:00.628 } 00:15:00.628 }, 00:15:00.628 "base_bdevs_list": [ 00:15:00.628 { 00:15:00.628 "name": "spare", 00:15:00.628 "uuid": "d29fc59f-7a29-514e-aae2-15a3e23f3b14", 00:15:00.628 "is_configured": true, 00:15:00.628 "data_offset": 0, 00:15:00.628 "data_size": 65536 00:15:00.628 }, 00:15:00.628 { 00:15:00.628 "name": "BaseBdev2", 00:15:00.628 "uuid": "5d7966f1-49bf-561b-95d3-1d3e903c72db", 00:15:00.628 "is_configured": true, 00:15:00.628 "data_offset": 0, 00:15:00.628 "data_size": 65536 00:15:00.628 }, 
00:15:00.628 { 00:15:00.628 "name": "BaseBdev3", 00:15:00.628 "uuid": "0e0d410c-6131-58c0-b502-0d8ef419b17f", 00:15:00.628 "is_configured": true, 00:15:00.628 "data_offset": 0, 00:15:00.628 "data_size": 65536 00:15:00.628 }, 00:15:00.628 { 00:15:00.628 "name": "BaseBdev4", 00:15:00.628 "uuid": "8c16be7e-67a4-5cfc-8bd9-475c71cf07aa", 00:15:00.628 "is_configured": true, 00:15:00.628 "data_offset": 0, 00:15:00.628 "data_size": 65536 00:15:00.628 } 00:15:00.628 ] 00:15:00.628 }' 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.628 [2024-09-30 12:32:12.404758] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.628 [2024-09-30 12:32:12.436359] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:00.628 [2024-09-30 12:32:12.436584] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:00.628 [2024-09-30 12:32:12.437684] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:00.628 [2024-09-30 12:32:12.445403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.628 [2024-09-30 
12:32:12.445444] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.628 [2024-09-30 12:32:12.445458] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:00.628 [2024-09-30 12:32:12.466857] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:00.628 12:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.888 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.888 "name": "raid_bdev1", 00:15:00.888 "uuid": "ae4d1c05-4299-4718-af30-f15119f37fa0", 00:15:00.888 "strip_size_kb": 0, 00:15:00.888 "state": "online", 00:15:00.888 "raid_level": "raid1", 00:15:00.888 "superblock": false, 00:15:00.888 "num_base_bdevs": 4, 00:15:00.888 "num_base_bdevs_discovered": 3, 00:15:00.888 "num_base_bdevs_operational": 3, 00:15:00.888 "base_bdevs_list": [ 00:15:00.888 { 00:15:00.888 "name": null, 00:15:00.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.888 "is_configured": false, 00:15:00.888 "data_offset": 0, 00:15:00.888 "data_size": 65536 00:15:00.888 }, 00:15:00.888 { 00:15:00.888 "name": "BaseBdev2", 00:15:00.888 "uuid": "5d7966f1-49bf-561b-95d3-1d3e903c72db", 00:15:00.888 "is_configured": true, 00:15:00.888 "data_offset": 0, 00:15:00.888 "data_size": 65536 00:15:00.888 }, 00:15:00.888 { 00:15:00.888 "name": "BaseBdev3", 00:15:00.888 "uuid": "0e0d410c-6131-58c0-b502-0d8ef419b17f", 00:15:00.888 "is_configured": true, 00:15:00.888 "data_offset": 0, 00:15:00.888 "data_size": 65536 00:15:00.888 }, 00:15:00.888 { 00:15:00.888 "name": "BaseBdev4", 00:15:00.888 "uuid": "8c16be7e-67a4-5cfc-8bd9-475c71cf07aa", 00:15:00.888 "is_configured": true, 00:15:00.888 "data_offset": 0, 00:15:00.888 "data_size": 65536 00:15:00.888 } 00:15:00.888 ] 00:15:00.888 }' 00:15:00.888 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.888 12:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.155 183.50 IOPS, 550.50 MiB/s 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:01.155 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:01.155 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:01.155 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:01.155 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.155 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.155 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.155 12:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.155 12:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.155 12:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.155 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.155 "name": "raid_bdev1", 00:15:01.155 "uuid": "ae4d1c05-4299-4718-af30-f15119f37fa0", 00:15:01.155 "strip_size_kb": 0, 00:15:01.155 "state": "online", 00:15:01.155 "raid_level": "raid1", 00:15:01.155 "superblock": false, 00:15:01.155 "num_base_bdevs": 4, 00:15:01.155 "num_base_bdevs_discovered": 3, 00:15:01.155 "num_base_bdevs_operational": 3, 00:15:01.155 "base_bdevs_list": [ 00:15:01.155 { 00:15:01.155 "name": null, 00:15:01.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.155 "is_configured": false, 00:15:01.155 "data_offset": 0, 00:15:01.155 "data_size": 65536 00:15:01.155 }, 00:15:01.155 { 00:15:01.155 "name": "BaseBdev2", 00:15:01.155 "uuid": "5d7966f1-49bf-561b-95d3-1d3e903c72db", 00:15:01.155 "is_configured": true, 00:15:01.155 "data_offset": 0, 00:15:01.155 "data_size": 65536 00:15:01.155 }, 00:15:01.155 { 00:15:01.155 "name": "BaseBdev3", 00:15:01.155 "uuid": "0e0d410c-6131-58c0-b502-0d8ef419b17f", 00:15:01.155 "is_configured": true, 00:15:01.155 "data_offset": 0, 
00:15:01.155 "data_size": 65536 00:15:01.155 }, 00:15:01.155 { 00:15:01.155 "name": "BaseBdev4", 00:15:01.155 "uuid": "8c16be7e-67a4-5cfc-8bd9-475c71cf07aa", 00:15:01.155 "is_configured": true, 00:15:01.155 "data_offset": 0, 00:15:01.155 "data_size": 65536 00:15:01.155 } 00:15:01.155 ] 00:15:01.155 }' 00:15:01.156 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.416 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:01.416 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.416 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:01.416 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:01.416 12:32:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.416 12:32:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.416 [2024-09-30 12:32:13.120635] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:01.416 12:32:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.416 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:01.416 [2024-09-30 12:32:13.174689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:01.416 [2024-09-30 12:32:13.176592] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:01.416 [2024-09-30 12:32:13.297763] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:01.416 [2024-09-30 12:32:13.299032] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:01.676 
[2024-09-30 12:32:13.530019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:01.676 [2024-09-30 12:32:13.530361] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:01.936 [2024-09-30 12:32:13.767300] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:01.936 [2024-09-30 12:32:13.768563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:02.196 176.67 IOPS, 530.00 MiB/s [2024-09-30 12:32:13.985485] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:02.196 [2024-09-30 12:32:13.985821] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.456 "name": "raid_bdev1", 00:15:02.456 "uuid": "ae4d1c05-4299-4718-af30-f15119f37fa0", 00:15:02.456 "strip_size_kb": 0, 00:15:02.456 "state": "online", 00:15:02.456 "raid_level": "raid1", 00:15:02.456 "superblock": false, 00:15:02.456 "num_base_bdevs": 4, 00:15:02.456 "num_base_bdevs_discovered": 4, 00:15:02.456 "num_base_bdevs_operational": 4, 00:15:02.456 "process": { 00:15:02.456 "type": "rebuild", 00:15:02.456 "target": "spare", 00:15:02.456 "progress": { 00:15:02.456 "blocks": 10240, 00:15:02.456 "percent": 15 00:15:02.456 } 00:15:02.456 }, 00:15:02.456 "base_bdevs_list": [ 00:15:02.456 { 00:15:02.456 "name": "spare", 00:15:02.456 "uuid": "d29fc59f-7a29-514e-aae2-15a3e23f3b14", 00:15:02.456 "is_configured": true, 00:15:02.456 "data_offset": 0, 00:15:02.456 "data_size": 65536 00:15:02.456 }, 00:15:02.456 { 00:15:02.456 "name": "BaseBdev2", 00:15:02.456 "uuid": "5d7966f1-49bf-561b-95d3-1d3e903c72db", 00:15:02.456 "is_configured": true, 00:15:02.456 "data_offset": 0, 00:15:02.456 "data_size": 65536 00:15:02.456 }, 00:15:02.456 { 00:15:02.456 "name": "BaseBdev3", 00:15:02.456 "uuid": "0e0d410c-6131-58c0-b502-0d8ef419b17f", 00:15:02.456 "is_configured": true, 00:15:02.456 "data_offset": 0, 00:15:02.456 "data_size": 65536 00:15:02.456 }, 00:15:02.456 { 00:15:02.456 "name": "BaseBdev4", 00:15:02.456 "uuid": "8c16be7e-67a4-5cfc-8bd9-475c71cf07aa", 00:15:02.456 "is_configured": true, 00:15:02.456 "data_offset": 0, 00:15:02.456 "data_size": 65536 00:15:02.456 } 00:15:02.456 ] 00:15:02.456 }' 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.456 12:32:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.456 12:32:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.456 [2024-09-30 12:32:14.295572] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:02.456 [2024-09-30 12:32:14.323310] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:02.717 [2024-09-30 12:32:14.432112] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:02.717 [2024-09-30 12:32:14.432149] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:02.717 [2024-09-30 12:32:14.433193] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:02.717 [2024-09-30 12:32:14.438730] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.717 "name": "raid_bdev1", 00:15:02.717 "uuid": "ae4d1c05-4299-4718-af30-f15119f37fa0", 00:15:02.717 "strip_size_kb": 0, 00:15:02.717 "state": "online", 00:15:02.717 "raid_level": "raid1", 00:15:02.717 "superblock": false, 00:15:02.717 "num_base_bdevs": 4, 00:15:02.717 "num_base_bdevs_discovered": 3, 00:15:02.717 "num_base_bdevs_operational": 3, 00:15:02.717 "process": { 00:15:02.717 "type": "rebuild", 00:15:02.717 "target": "spare", 00:15:02.717 "progress": { 00:15:02.717 "blocks": 14336, 00:15:02.717 "percent": 21 00:15:02.717 } 00:15:02.717 }, 00:15:02.717 "base_bdevs_list": [ 00:15:02.717 { 00:15:02.717 
"name": "spare", 00:15:02.717 "uuid": "d29fc59f-7a29-514e-aae2-15a3e23f3b14", 00:15:02.717 "is_configured": true, 00:15:02.717 "data_offset": 0, 00:15:02.717 "data_size": 65536 00:15:02.717 }, 00:15:02.717 { 00:15:02.717 "name": null, 00:15:02.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.717 "is_configured": false, 00:15:02.717 "data_offset": 0, 00:15:02.717 "data_size": 65536 00:15:02.717 }, 00:15:02.717 { 00:15:02.717 "name": "BaseBdev3", 00:15:02.717 "uuid": "0e0d410c-6131-58c0-b502-0d8ef419b17f", 00:15:02.717 "is_configured": true, 00:15:02.717 "data_offset": 0, 00:15:02.717 "data_size": 65536 00:15:02.717 }, 00:15:02.717 { 00:15:02.717 "name": "BaseBdev4", 00:15:02.717 "uuid": "8c16be7e-67a4-5cfc-8bd9-475c71cf07aa", 00:15:02.717 "is_configured": true, 00:15:02.717 "data_offset": 0, 00:15:02.717 "data_size": 65536 00:15:02.717 } 00:15:02.717 ] 00:15:02.717 }' 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=479 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.717 "name": "raid_bdev1", 00:15:02.717 "uuid": "ae4d1c05-4299-4718-af30-f15119f37fa0", 00:15:02.717 "strip_size_kb": 0, 00:15:02.717 "state": "online", 00:15:02.717 "raid_level": "raid1", 00:15:02.717 "superblock": false, 00:15:02.717 "num_base_bdevs": 4, 00:15:02.717 "num_base_bdevs_discovered": 3, 00:15:02.717 "num_base_bdevs_operational": 3, 00:15:02.717 "process": { 00:15:02.717 "type": "rebuild", 00:15:02.717 "target": "spare", 00:15:02.717 "progress": { 00:15:02.717 "blocks": 14336, 00:15:02.717 "percent": 21 00:15:02.717 } 00:15:02.717 }, 00:15:02.717 "base_bdevs_list": [ 00:15:02.717 { 00:15:02.717 "name": "spare", 00:15:02.717 "uuid": "d29fc59f-7a29-514e-aae2-15a3e23f3b14", 00:15:02.717 "is_configured": true, 00:15:02.717 "data_offset": 0, 00:15:02.717 "data_size": 65536 00:15:02.717 }, 00:15:02.717 { 00:15:02.717 "name": null, 00:15:02.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.717 "is_configured": false, 00:15:02.717 "data_offset": 0, 00:15:02.717 "data_size": 65536 00:15:02.717 }, 00:15:02.717 { 00:15:02.717 "name": "BaseBdev3", 00:15:02.717 "uuid": "0e0d410c-6131-58c0-b502-0d8ef419b17f", 00:15:02.717 "is_configured": true, 00:15:02.717 "data_offset": 0, 00:15:02.717 
"data_size": 65536 00:15:02.717 }, 00:15:02.717 { 00:15:02.717 "name": "BaseBdev4", 00:15:02.717 "uuid": "8c16be7e-67a4-5cfc-8bd9-475c71cf07aa", 00:15:02.717 "is_configured": true, 00:15:02.717 "data_offset": 0, 00:15:02.717 "data_size": 65536 00:15:02.717 } 00:15:02.717 ] 00:15:02.717 }' 00:15:02.717 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.978 [2024-09-30 12:32:14.642060] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:02.978 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.978 [2024-09-30 12:32:14.642555] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:02.978 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.978 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.978 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:03.238 155.25 IOPS, 465.75 MiB/s [2024-09-30 12:32:15.086141] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:03.238 [2024-09-30 12:32:15.086585] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:03.808 [2024-09-30 12:32:15.419385] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:03.808 [2024-09-30 12:32:15.640070] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:03.808 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:03.808 12:32:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.808 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.808 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.808 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.808 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.068 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.068 12:32:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.068 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.068 12:32:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.068 12:32:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.068 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.068 "name": "raid_bdev1", 00:15:04.068 "uuid": "ae4d1c05-4299-4718-af30-f15119f37fa0", 00:15:04.068 "strip_size_kb": 0, 00:15:04.068 "state": "online", 00:15:04.068 "raid_level": "raid1", 00:15:04.068 "superblock": false, 00:15:04.068 "num_base_bdevs": 4, 00:15:04.068 "num_base_bdevs_discovered": 3, 00:15:04.068 "num_base_bdevs_operational": 3, 00:15:04.068 "process": { 00:15:04.068 "type": "rebuild", 00:15:04.068 "target": "spare", 00:15:04.068 "progress": { 00:15:04.068 "blocks": 28672, 00:15:04.068 "percent": 43 00:15:04.068 } 00:15:04.068 }, 00:15:04.068 "base_bdevs_list": [ 00:15:04.068 { 00:15:04.068 "name": "spare", 00:15:04.068 "uuid": "d29fc59f-7a29-514e-aae2-15a3e23f3b14", 00:15:04.068 "is_configured": true, 00:15:04.068 "data_offset": 0, 00:15:04.068 "data_size": 65536 
00:15:04.068 }, 00:15:04.068 { 00:15:04.068 "name": null, 00:15:04.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.068 "is_configured": false, 00:15:04.068 "data_offset": 0, 00:15:04.068 "data_size": 65536 00:15:04.068 }, 00:15:04.068 { 00:15:04.068 "name": "BaseBdev3", 00:15:04.068 "uuid": "0e0d410c-6131-58c0-b502-0d8ef419b17f", 00:15:04.068 "is_configured": true, 00:15:04.068 "data_offset": 0, 00:15:04.068 "data_size": 65536 00:15:04.068 }, 00:15:04.068 { 00:15:04.068 "name": "BaseBdev4", 00:15:04.068 "uuid": "8c16be7e-67a4-5cfc-8bd9-475c71cf07aa", 00:15:04.068 "is_configured": true, 00:15:04.068 "data_offset": 0, 00:15:04.068 "data_size": 65536 00:15:04.068 } 00:15:04.068 ] 00:15:04.068 }' 00:15:04.068 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.068 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.068 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.068 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.068 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:04.068 [2024-09-30 12:32:15.854530] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:04.329 137.00 IOPS, 411.00 MiB/s [2024-09-30 12:32:16.094842] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:04.588 [2024-09-30 12:32:16.424188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:04.848 [2024-09-30 12:32:16.535073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:04.848 [2024-09-30 12:32:16.535323] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:05.107 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.107 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.107 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.107 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.107 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.107 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.107 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.108 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.108 12:32:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.108 12:32:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.108 119.67 IOPS, 359.00 MiB/s 12:32:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.108 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.108 "name": "raid_bdev1", 00:15:05.108 "uuid": "ae4d1c05-4299-4718-af30-f15119f37fa0", 00:15:05.108 "strip_size_kb": 0, 00:15:05.108 "state": "online", 00:15:05.108 "raid_level": "raid1", 00:15:05.108 "superblock": false, 00:15:05.108 "num_base_bdevs": 4, 00:15:05.108 "num_base_bdevs_discovered": 3, 00:15:05.108 "num_base_bdevs_operational": 3, 00:15:05.108 "process": { 00:15:05.108 "type": "rebuild", 00:15:05.108 "target": "spare", 00:15:05.108 "progress": { 00:15:05.108 "blocks": 45056, 00:15:05.108 "percent": 68 00:15:05.108 
} 00:15:05.108 }, 00:15:05.108 "base_bdevs_list": [ 00:15:05.108 { 00:15:05.108 "name": "spare", 00:15:05.108 "uuid": "d29fc59f-7a29-514e-aae2-15a3e23f3b14", 00:15:05.108 "is_configured": true, 00:15:05.108 "data_offset": 0, 00:15:05.108 "data_size": 65536 00:15:05.108 }, 00:15:05.108 { 00:15:05.108 "name": null, 00:15:05.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.108 "is_configured": false, 00:15:05.108 "data_offset": 0, 00:15:05.108 "data_size": 65536 00:15:05.108 }, 00:15:05.108 { 00:15:05.108 "name": "BaseBdev3", 00:15:05.108 "uuid": "0e0d410c-6131-58c0-b502-0d8ef419b17f", 00:15:05.108 "is_configured": true, 00:15:05.108 "data_offset": 0, 00:15:05.108 "data_size": 65536 00:15:05.108 }, 00:15:05.108 { 00:15:05.108 "name": "BaseBdev4", 00:15:05.108 "uuid": "8c16be7e-67a4-5cfc-8bd9-475c71cf07aa", 00:15:05.108 "is_configured": true, 00:15:05.108 "data_offset": 0, 00:15:05.108 "data_size": 65536 00:15:05.108 } 00:15:05.108 ] 00:15:05.108 }' 00:15:05.108 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.108 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.108 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.377 12:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.377 12:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.377 [2024-09-30 12:32:17.161115] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:05.377 [2024-09-30 12:32:17.161958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:05.654 [2024-09-30 12:32:17.376007] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 
offset_end: 55296 00:15:06.233 105.57 IOPS, 316.71 MiB/s 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.233 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.233 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.233 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.233 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.233 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.233 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.234 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.234 12:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.234 12:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.234 12:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.234 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.234 "name": "raid_bdev1", 00:15:06.234 "uuid": "ae4d1c05-4299-4718-af30-f15119f37fa0", 00:15:06.234 "strip_size_kb": 0, 00:15:06.234 "state": "online", 00:15:06.234 "raid_level": "raid1", 00:15:06.234 "superblock": false, 00:15:06.234 "num_base_bdevs": 4, 00:15:06.234 "num_base_bdevs_discovered": 3, 00:15:06.234 "num_base_bdevs_operational": 3, 00:15:06.234 "process": { 00:15:06.234 "type": "rebuild", 00:15:06.234 "target": "spare", 00:15:06.234 "progress": { 00:15:06.234 "blocks": 61440, 00:15:06.234 "percent": 93 00:15:06.234 } 00:15:06.234 }, 00:15:06.234 "base_bdevs_list": [ 00:15:06.234 { 00:15:06.234 "name": 
"spare", 00:15:06.234 "uuid": "d29fc59f-7a29-514e-aae2-15a3e23f3b14", 00:15:06.234 "is_configured": true, 00:15:06.234 "data_offset": 0, 00:15:06.234 "data_size": 65536 00:15:06.234 }, 00:15:06.234 { 00:15:06.234 "name": null, 00:15:06.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.234 "is_configured": false, 00:15:06.234 "data_offset": 0, 00:15:06.234 "data_size": 65536 00:15:06.234 }, 00:15:06.234 { 00:15:06.234 "name": "BaseBdev3", 00:15:06.234 "uuid": "0e0d410c-6131-58c0-b502-0d8ef419b17f", 00:15:06.234 "is_configured": true, 00:15:06.234 "data_offset": 0, 00:15:06.234 "data_size": 65536 00:15:06.234 }, 00:15:06.234 { 00:15:06.234 "name": "BaseBdev4", 00:15:06.234 "uuid": "8c16be7e-67a4-5cfc-8bd9-475c71cf07aa", 00:15:06.234 "is_configured": true, 00:15:06.234 "data_offset": 0, 00:15:06.234 "data_size": 65536 00:15:06.234 } 00:15:06.234 ] 00:15:06.234 }' 00:15:06.234 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.234 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.234 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.493 [2024-09-30 12:32:18.132730] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:06.493 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.493 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:06.493 [2024-09-30 12:32:18.232558] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:06.493 [2024-09-30 12:32:18.240101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.328 97.88 IOPS, 293.62 MiB/s 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.328 12:32:19 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.328 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.328 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.328 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.328 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.328 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.328 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.328 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.328 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.328 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.328 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.328 "name": "raid_bdev1", 00:15:07.328 "uuid": "ae4d1c05-4299-4718-af30-f15119f37fa0", 00:15:07.328 "strip_size_kb": 0, 00:15:07.328 "state": "online", 00:15:07.328 "raid_level": "raid1", 00:15:07.328 "superblock": false, 00:15:07.328 "num_base_bdevs": 4, 00:15:07.328 "num_base_bdevs_discovered": 3, 00:15:07.328 "num_base_bdevs_operational": 3, 00:15:07.328 "base_bdevs_list": [ 00:15:07.328 { 00:15:07.328 "name": "spare", 00:15:07.328 "uuid": "d29fc59f-7a29-514e-aae2-15a3e23f3b14", 00:15:07.328 "is_configured": true, 00:15:07.328 "data_offset": 0, 00:15:07.328 "data_size": 65536 00:15:07.328 }, 00:15:07.328 { 00:15:07.328 "name": null, 00:15:07.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.328 "is_configured": false, 00:15:07.328 "data_offset": 0, 00:15:07.328 "data_size": 65536 00:15:07.328 }, 
00:15:07.328 { 00:15:07.328 "name": "BaseBdev3", 00:15:07.328 "uuid": "0e0d410c-6131-58c0-b502-0d8ef419b17f", 00:15:07.328 "is_configured": true, 00:15:07.328 "data_offset": 0, 00:15:07.328 "data_size": 65536 00:15:07.328 }, 00:15:07.328 { 00:15:07.328 "name": "BaseBdev4", 00:15:07.328 "uuid": "8c16be7e-67a4-5cfc-8bd9-475c71cf07aa", 00:15:07.328 "is_configured": true, 00:15:07.328 "data_offset": 0, 00:15:07.328 "data_size": 65536 00:15:07.328 } 00:15:07.328 ] 00:15:07.328 }' 00:15:07.329 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.591 "name": "raid_bdev1", 00:15:07.591 "uuid": "ae4d1c05-4299-4718-af30-f15119f37fa0", 00:15:07.591 "strip_size_kb": 0, 00:15:07.591 "state": "online", 00:15:07.591 "raid_level": "raid1", 00:15:07.591 "superblock": false, 00:15:07.591 "num_base_bdevs": 4, 00:15:07.591 "num_base_bdevs_discovered": 3, 00:15:07.591 "num_base_bdevs_operational": 3, 00:15:07.591 "base_bdevs_list": [ 00:15:07.591 { 00:15:07.591 "name": "spare", 00:15:07.591 "uuid": "d29fc59f-7a29-514e-aae2-15a3e23f3b14", 00:15:07.591 "is_configured": true, 00:15:07.591 "data_offset": 0, 00:15:07.591 "data_size": 65536 00:15:07.591 }, 00:15:07.591 { 00:15:07.591 "name": null, 00:15:07.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.591 "is_configured": false, 00:15:07.591 "data_offset": 0, 00:15:07.591 "data_size": 65536 00:15:07.591 }, 00:15:07.591 { 00:15:07.591 "name": "BaseBdev3", 00:15:07.591 "uuid": "0e0d410c-6131-58c0-b502-0d8ef419b17f", 00:15:07.591 "is_configured": true, 00:15:07.591 "data_offset": 0, 00:15:07.591 "data_size": 65536 00:15:07.591 }, 00:15:07.591 { 00:15:07.591 "name": "BaseBdev4", 00:15:07.591 "uuid": "8c16be7e-67a4-5cfc-8bd9-475c71cf07aa", 00:15:07.591 "is_configured": true, 00:15:07.591 "data_offset": 0, 00:15:07.591 "data_size": 65536 00:15:07.591 } 00:15:07.591 ] 00:15:07.591 }' 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.591 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.851 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.851 "name": "raid_bdev1", 00:15:07.851 "uuid": "ae4d1c05-4299-4718-af30-f15119f37fa0", 00:15:07.851 "strip_size_kb": 0, 00:15:07.851 "state": "online", 00:15:07.851 "raid_level": "raid1", 00:15:07.851 "superblock": false, 00:15:07.851 
"num_base_bdevs": 4, 00:15:07.851 "num_base_bdevs_discovered": 3, 00:15:07.851 "num_base_bdevs_operational": 3, 00:15:07.851 "base_bdevs_list": [ 00:15:07.851 { 00:15:07.851 "name": "spare", 00:15:07.851 "uuid": "d29fc59f-7a29-514e-aae2-15a3e23f3b14", 00:15:07.851 "is_configured": true, 00:15:07.851 "data_offset": 0, 00:15:07.851 "data_size": 65536 00:15:07.851 }, 00:15:07.851 { 00:15:07.851 "name": null, 00:15:07.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.851 "is_configured": false, 00:15:07.851 "data_offset": 0, 00:15:07.851 "data_size": 65536 00:15:07.851 }, 00:15:07.851 { 00:15:07.851 "name": "BaseBdev3", 00:15:07.851 "uuid": "0e0d410c-6131-58c0-b502-0d8ef419b17f", 00:15:07.851 "is_configured": true, 00:15:07.851 "data_offset": 0, 00:15:07.851 "data_size": 65536 00:15:07.851 }, 00:15:07.851 { 00:15:07.851 "name": "BaseBdev4", 00:15:07.851 "uuid": "8c16be7e-67a4-5cfc-8bd9-475c71cf07aa", 00:15:07.852 "is_configured": true, 00:15:07.852 "data_offset": 0, 00:15:07.852 "data_size": 65536 00:15:07.852 } 00:15:07.852 ] 00:15:07.852 }' 00:15:07.852 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.852 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.112 90.89 IOPS, 272.67 MiB/s 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:08.112 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.112 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.112 [2024-09-30 12:32:19.933964] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:08.112 [2024-09-30 12:32:19.933999] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.112 00:15:08.112 Latency(us) 00:15:08.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.112 
Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:08.112 raid_bdev1 : 9.11 89.99 269.98 0.00 0.00 15719.49 302.28 109894.43 00:15:08.112 =================================================================================================================== 00:15:08.112 Total : 89.99 269.98 0.00 0.00 15719.49 302.28 109894.43 00:15:08.112 [2024-09-30 12:32:19.989667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.112 [2024-09-30 12:32:19.989708] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.112 [2024-09-30 12:32:19.989817] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.112 [2024-09-30 12:32:19.989831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:08.112 { 00:15:08.112 "results": [ 00:15:08.112 { 00:15:08.112 "job": "raid_bdev1", 00:15:08.112 "core_mask": "0x1", 00:15:08.112 "workload": "randrw", 00:15:08.112 "percentage": 50, 00:15:08.112 "status": "finished", 00:15:08.112 "queue_depth": 2, 00:15:08.112 "io_size": 3145728, 00:15:08.112 "runtime": 9.111843, 00:15:08.112 "iops": 89.99277094655824, 00:15:08.112 "mibps": 269.9783128396747, 00:15:08.112 "io_failed": 0, 00:15:08.112 "io_timeout": 0, 00:15:08.112 "avg_latency_us": 15719.491728618594, 00:15:08.112 "min_latency_us": 302.2812227074236, 00:15:08.112 "max_latency_us": 109894.42794759825 00:15:08.112 } 00:15:08.112 ], 00:15:08.112 "core_count": 1 00:15:08.112 } 00:15:08.112 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.112 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.112 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.112 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # 
jq length 00:15:08.112 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.372 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.372 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:08.372 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:08.372 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:08.372 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:08.372 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.372 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:08.372 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:08.372 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:08.372 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:08.372 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:08.372 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:08.372 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:08.372 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:08.372 /dev/nbd0 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:08.632 12:32:20 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:08.632 1+0 records in 00:15:08.632 1+0 records out 00:15:08.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464501 s, 8.8 MB/s 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:08.632 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:08.633 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:08.633 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:08.633 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:08.633 /dev/nbd1 00:15:08.633 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:08.893 1+0 records in 00:15:08.893 1+0 records out 00:15:08.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378173 s, 10.8 MB/s 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:08.893 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev4') 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:09.153 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:09.413 /dev/nbd1 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.413 1+0 records in 00:15:09.413 1+0 records out 00:15:09.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392082 s, 10.4 MB/s 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.413 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:09.674 12:32:21 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:09.674 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:09.674 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:09.674 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.674 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.674 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:09.674 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:09.674 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.674 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:09.674 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.674 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:09.674 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:09.674 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:09.674 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.674 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:09.934 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:09.934 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:09.934 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:09.934 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.934 12:32:21 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.934 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:09.934 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:09.934 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.934 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:09.934 12:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78592 00:15:09.934 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 78592 ']' 00:15:09.934 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 78592 00:15:09.934 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:15:09.934 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:09.934 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78592 00:15:09.934 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:09.934 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:09.934 killing process with pid 78592 00:15:09.934 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78592' 00:15:09.934 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 78592 00:15:09.934 Received shutdown signal, test time was about 10.888016 seconds 00:15:09.934 00:15:09.934 Latency(us) 00:15:09.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.934 =================================================================================================================== 00:15:09.934 Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:15:09.934 [2024-09-30 12:32:21.742568] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:09.934 12:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 78592 00:15:10.504 [2024-09-30 12:32:22.135911] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:11.887 00:15:11.887 real 0m14.377s 00:15:11.887 user 0m17.939s 00:15:11.887 sys 0m1.950s 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.887 ************************************ 00:15:11.887 END TEST raid_rebuild_test_io 00:15:11.887 ************************************ 00:15:11.887 12:32:23 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:15:11.887 12:32:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:11.887 12:32:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:11.887 12:32:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:11.887 ************************************ 00:15:11.887 START TEST raid_rebuild_test_sb_io 00:15:11.887 ************************************ 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:11.887 12:32:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79020 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79020 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 79020 ']' 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:11.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:11.887 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.887 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:11.887 Zero copy mechanism will not be used. 00:15:11.887 [2024-09-30 12:32:23.549394] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:15:11.887 [2024-09-30 12:32:23.549528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79020 ] 00:15:11.887 [2024-09-30 12:32:23.713553] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.147 [2024-09-30 12:32:23.910326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.406 [2024-09-30 12:32:24.101780] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.406 [2024-09-30 12:32:24.101822] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.667 BaseBdev1_malloc 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.667 [2024-09-30 12:32:24.408556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:12.667 [2024-09-30 12:32:24.408632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.667 [2024-09-30 12:32:24.408654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:12.667 [2024-09-30 12:32:24.408667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.667 [2024-09-30 12:32:24.410605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.667 [2024-09-30 12:32:24.410643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:12.667 BaseBdev1 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.667 BaseBdev2_malloc 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.667 [2024-09-30 12:32:24.492274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:12.667 [2024-09-30 12:32:24.492331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.667 [2024-09-30 12:32:24.492349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:12.667 [2024-09-30 12:32:24.492360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.667 [2024-09-30 12:32:24.494264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.667 [2024-09-30 12:32:24.494302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:12.667 BaseBdev2 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.667 BaseBdev3_malloc 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.667 12:32:24 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.667 [2024-09-30 12:32:24.544668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:12.667 [2024-09-30 12:32:24.544722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.667 [2024-09-30 12:32:24.544751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:12.667 [2024-09-30 12:32:24.544761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.667 [2024-09-30 12:32:24.546709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.667 [2024-09-30 12:32:24.546758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:12.667 BaseBdev3 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.667 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.927 BaseBdev4_malloc 00:15:12.927 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.927 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:12.927 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.927 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.927 [2024-09-30 12:32:24.595099] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:15:12.927 [2024-09-30 12:32:24.595147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.927 [2024-09-30 12:32:24.595163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:12.927 [2024-09-30 12:32:24.595172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.927 [2024-09-30 12:32:24.597133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.927 [2024-09-30 12:32:24.597176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:12.927 BaseBdev4 00:15:12.927 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.927 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:12.927 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.927 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.927 spare_malloc 00:15:12.927 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.927 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:12.927 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.927 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.928 spare_delay 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.928 [2024-09-30 12:32:24.660250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:12.928 [2024-09-30 12:32:24.660318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.928 [2024-09-30 12:32:24.660337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:12.928 [2024-09-30 12:32:24.660347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.928 [2024-09-30 12:32:24.662359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.928 [2024-09-30 12:32:24.662396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:12.928 spare 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.928 [2024-09-30 12:32:24.672283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.928 [2024-09-30 12:32:24.673959] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.928 [2024-09-30 12:32:24.674040] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:12.928 [2024-09-30 12:32:24.674091] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:12.928 [2024-09-30 12:32:24.674258] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:15:12.928 [2024-09-30 12:32:24.674278] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:12.928 [2024-09-30 12:32:24.674506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:12.928 [2024-09-30 12:32:24.674670] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:12.928 [2024-09-30 12:32:24.674687] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:12.928 [2024-09-30 12:32:24.674841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.928 "name": "raid_bdev1", 00:15:12.928 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f", 00:15:12.928 "strip_size_kb": 0, 00:15:12.928 "state": "online", 00:15:12.928 "raid_level": "raid1", 00:15:12.928 "superblock": true, 00:15:12.928 "num_base_bdevs": 4, 00:15:12.928 "num_base_bdevs_discovered": 4, 00:15:12.928 "num_base_bdevs_operational": 4, 00:15:12.928 "base_bdevs_list": [ 00:15:12.928 { 00:15:12.928 "name": "BaseBdev1", 00:15:12.928 "uuid": "a360bf5a-2f8c-5518-b6a3-5c7152c84ec7", 00:15:12.928 "is_configured": true, 00:15:12.928 "data_offset": 2048, 00:15:12.928 "data_size": 63488 00:15:12.928 }, 00:15:12.928 { 00:15:12.928 "name": "BaseBdev2", 00:15:12.928 "uuid": "df146eb4-1b75-5b1d-9bf4-b5d76a93ce47", 00:15:12.928 "is_configured": true, 00:15:12.928 "data_offset": 2048, 00:15:12.928 "data_size": 63488 00:15:12.928 }, 00:15:12.928 { 00:15:12.928 "name": "BaseBdev3", 00:15:12.928 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:12.928 "is_configured": true, 00:15:12.928 "data_offset": 2048, 00:15:12.928 "data_size": 63488 00:15:12.928 }, 00:15:12.928 { 00:15:12.928 "name": "BaseBdev4", 00:15:12.928 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:12.928 "is_configured": true, 00:15:12.928 "data_offset": 2048, 00:15:12.928 "data_size": 63488 00:15:12.928 } 00:15:12.928 ] 00:15:12.928 }' 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:12.928 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.498 [2024-09-30 12:32:25.171790] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:13.498 12:32:25 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.498 [2024-09-30 12:32:25.267456] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.498 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.499 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.499 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.499 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.499 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:13.499 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.499 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.499 "name": "raid_bdev1", 00:15:13.499 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f", 00:15:13.499 "strip_size_kb": 0, 00:15:13.499 "state": "online", 00:15:13.499 "raid_level": "raid1", 00:15:13.499 "superblock": true, 00:15:13.499 "num_base_bdevs": 4, 00:15:13.499 "num_base_bdevs_discovered": 3, 00:15:13.499 "num_base_bdevs_operational": 3, 00:15:13.499 "base_bdevs_list": [ 00:15:13.499 { 00:15:13.499 "name": null, 00:15:13.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.499 "is_configured": false, 00:15:13.499 "data_offset": 0, 00:15:13.499 "data_size": 63488 00:15:13.499 }, 00:15:13.499 { 00:15:13.499 "name": "BaseBdev2", 00:15:13.499 "uuid": "df146eb4-1b75-5b1d-9bf4-b5d76a93ce47", 00:15:13.499 "is_configured": true, 00:15:13.499 "data_offset": 2048, 00:15:13.499 "data_size": 63488 00:15:13.499 }, 00:15:13.499 { 00:15:13.499 "name": "BaseBdev3", 00:15:13.499 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:13.499 "is_configured": true, 00:15:13.499 "data_offset": 2048, 00:15:13.499 "data_size": 63488 00:15:13.499 }, 00:15:13.499 { 00:15:13.499 "name": "BaseBdev4", 00:15:13.499 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:13.499 "is_configured": true, 00:15:13.499 "data_offset": 2048, 00:15:13.499 "data_size": 63488 00:15:13.499 } 00:15:13.499 ] 00:15:13.499 }' 00:15:13.499 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.499 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.499 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:13.499 Zero copy mechanism will not be used. 00:15:13.499 Running I/O for 60 seconds... 
00:15:13.499 [2024-09-30 12:32:25.383065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:14.069 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:14.069 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.069 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.069 [2024-09-30 12:32:25.685000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.069 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.069 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:14.069 [2024-09-30 12:32:25.729869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:14.069 [2024-09-30 12:32:25.731674] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:14.069 [2024-09-30 12:32:25.840052] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:14.069 [2024-09-30 12:32:25.840612] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:14.069 [2024-09-30 12:32:25.960057] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:14.069 [2024-09-30 12:32:25.960809] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:14.638 [2024-09-30 12:32:26.321534] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:14.898 193.00 IOPS, 579.00 MiB/s [2024-09-30 12:32:26.545779] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
10240 offset_begin: 6144 offset_end: 12288 00:15:14.899 [2024-09-30 12:32:26.546105] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:14.899 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.899 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.899 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.899 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.899 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.899 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.899 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.899 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.899 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.899 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.899 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.899 "name": "raid_bdev1", 00:15:14.899 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f", 00:15:14.899 "strip_size_kb": 0, 00:15:14.899 "state": "online", 00:15:14.899 "raid_level": "raid1", 00:15:14.899 "superblock": true, 00:15:14.899 "num_base_bdevs": 4, 00:15:14.899 "num_base_bdevs_discovered": 4, 00:15:14.899 "num_base_bdevs_operational": 4, 00:15:14.899 "process": { 00:15:14.899 "type": "rebuild", 00:15:14.899 "target": "spare", 00:15:14.899 "progress": { 00:15:14.899 "blocks": 12288, 00:15:14.899 "percent": 19 00:15:14.899 } 
00:15:14.899 }, 00:15:14.899 "base_bdevs_list": [ 00:15:14.899 { 00:15:14.899 "name": "spare", 00:15:14.899 "uuid": "5f76b7b0-9051-5a6e-96e7-e65b38b509fd", 00:15:14.899 "is_configured": true, 00:15:14.899 "data_offset": 2048, 00:15:14.899 "data_size": 63488 00:15:14.899 }, 00:15:14.899 { 00:15:14.899 "name": "BaseBdev2", 00:15:14.899 "uuid": "df146eb4-1b75-5b1d-9bf4-b5d76a93ce47", 00:15:14.899 "is_configured": true, 00:15:14.899 "data_offset": 2048, 00:15:14.899 "data_size": 63488 00:15:14.899 }, 00:15:14.899 { 00:15:14.899 "name": "BaseBdev3", 00:15:14.899 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:14.899 "is_configured": true, 00:15:14.899 "data_offset": 2048, 00:15:14.899 "data_size": 63488 00:15:14.899 }, 00:15:14.899 { 00:15:14.899 "name": "BaseBdev4", 00:15:14.899 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:14.899 "is_configured": true, 00:15:14.899 "data_offset": 2048, 00:15:14.899 "data_size": 63488 00:15:14.899 } 00:15:14.899 ] 00:15:14.899 }' 00:15:14.899 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.899 [2024-09-30 12:32:26.785036] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:14.899 [2024-09-30 12:32:26.786369] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:15.159 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.160 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.160 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.160 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:15.160 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:15.160 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.160 [2024-09-30 12:32:26.876984] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:15.160 [2024-09-30 12:32:26.995432] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:15.160 [2024-09-30 12:32:26.998492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.160 [2024-09-30 12:32:26.998539] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:15.160 [2024-09-30 12:32:26.998550] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:15.160 [2024-09-30 12:32:27.032364] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:15.160 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.160 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:15.160 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.160 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.160 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.160 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.160 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.160 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.160 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.160 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:15:15.160 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.160 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.160 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.160 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.160 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.420 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.420 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.420 "name": "raid_bdev1", 00:15:15.420 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f", 00:15:15.420 "strip_size_kb": 0, 00:15:15.420 "state": "online", 00:15:15.420 "raid_level": "raid1", 00:15:15.420 "superblock": true, 00:15:15.420 "num_base_bdevs": 4, 00:15:15.420 "num_base_bdevs_discovered": 3, 00:15:15.420 "num_base_bdevs_operational": 3, 00:15:15.420 "base_bdevs_list": [ 00:15:15.420 { 00:15:15.420 "name": null, 00:15:15.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.420 "is_configured": false, 00:15:15.420 "data_offset": 0, 00:15:15.420 "data_size": 63488 00:15:15.420 }, 00:15:15.420 { 00:15:15.420 "name": "BaseBdev2", 00:15:15.420 "uuid": "df146eb4-1b75-5b1d-9bf4-b5d76a93ce47", 00:15:15.420 "is_configured": true, 00:15:15.420 "data_offset": 2048, 00:15:15.420 "data_size": 63488 00:15:15.420 }, 00:15:15.420 { 00:15:15.420 "name": "BaseBdev3", 00:15:15.420 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:15.420 "is_configured": true, 00:15:15.420 "data_offset": 2048, 00:15:15.420 "data_size": 63488 00:15:15.420 }, 00:15:15.420 { 00:15:15.420 "name": "BaseBdev4", 00:15:15.420 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:15.420 "is_configured": 
true, 00:15:15.420 "data_offset": 2048, 00:15:15.420 "data_size": 63488 00:15:15.420 } 00:15:15.420 ] 00:15:15.420 }' 00:15:15.420 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.420 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.680 182.00 IOPS, 546.00 MiB/s 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:15.681 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.681 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:15.681 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:15.681 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.681 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.681 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.681 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.681 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.681 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.681 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.681 "name": "raid_bdev1", 00:15:15.681 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f", 00:15:15.681 "strip_size_kb": 0, 00:15:15.681 "state": "online", 00:15:15.681 "raid_level": "raid1", 00:15:15.681 "superblock": true, 00:15:15.681 "num_base_bdevs": 4, 00:15:15.681 "num_base_bdevs_discovered": 3, 00:15:15.681 "num_base_bdevs_operational": 3, 00:15:15.681 "base_bdevs_list": [ 00:15:15.681 { 
00:15:15.681 "name": null, 00:15:15.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.681 "is_configured": false, 00:15:15.681 "data_offset": 0, 00:15:15.681 "data_size": 63488 00:15:15.681 }, 00:15:15.681 { 00:15:15.681 "name": "BaseBdev2", 00:15:15.681 "uuid": "df146eb4-1b75-5b1d-9bf4-b5d76a93ce47", 00:15:15.681 "is_configured": true, 00:15:15.681 "data_offset": 2048, 00:15:15.681 "data_size": 63488 00:15:15.681 }, 00:15:15.681 { 00:15:15.681 "name": "BaseBdev3", 00:15:15.681 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:15.681 "is_configured": true, 00:15:15.681 "data_offset": 2048, 00:15:15.681 "data_size": 63488 00:15:15.681 }, 00:15:15.681 { 00:15:15.681 "name": "BaseBdev4", 00:15:15.681 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:15.681 "is_configured": true, 00:15:15.681 "data_offset": 2048, 00:15:15.681 "data_size": 63488 00:15:15.681 } 00:15:15.681 ] 00:15:15.681 }' 00:15:15.681 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.681 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:15.681 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.941 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:15.941 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:15.941 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.941 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.941 [2024-09-30 12:32:27.620633] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:15.941 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.941 12:32:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:15.941 [2024-09-30 12:32:27.683622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:15.941 [2024-09-30 12:32:27.685391] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:15.941 [2024-09-30 12:32:27.813930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:15.941 [2024-09-30 12:32:27.814399] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:16.201 [2024-09-30 12:32:27.934674] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:16.201 [2024-09-30 12:32:27.935428] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:16.461 [2024-09-30 12:32:28.286014] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:16.721 164.33 IOPS, 493.00 MiB/s [2024-09-30 12:32:28.503157] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.981 "name": "raid_bdev1", 00:15:16.981 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f", 00:15:16.981 "strip_size_kb": 0, 00:15:16.981 "state": "online", 00:15:16.981 "raid_level": "raid1", 00:15:16.981 "superblock": true, 00:15:16.981 "num_base_bdevs": 4, 00:15:16.981 "num_base_bdevs_discovered": 4, 00:15:16.981 "num_base_bdevs_operational": 4, 00:15:16.981 "process": { 00:15:16.981 "type": "rebuild", 00:15:16.981 "target": "spare", 00:15:16.981 "progress": { 00:15:16.981 "blocks": 10240, 00:15:16.981 "percent": 16 00:15:16.981 } 00:15:16.981 }, 00:15:16.981 "base_bdevs_list": [ 00:15:16.981 { 00:15:16.981 "name": "spare", 00:15:16.981 "uuid": "5f76b7b0-9051-5a6e-96e7-e65b38b509fd", 00:15:16.981 "is_configured": true, 00:15:16.981 "data_offset": 2048, 00:15:16.981 "data_size": 63488 00:15:16.981 }, 00:15:16.981 { 00:15:16.981 "name": "BaseBdev2", 00:15:16.981 "uuid": "df146eb4-1b75-5b1d-9bf4-b5d76a93ce47", 00:15:16.981 "is_configured": true, 00:15:16.981 "data_offset": 2048, 00:15:16.981 "data_size": 63488 00:15:16.981 }, 00:15:16.981 { 00:15:16.981 "name": "BaseBdev3", 00:15:16.981 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:16.981 "is_configured": true, 00:15:16.981 "data_offset": 2048, 00:15:16.981 "data_size": 63488 00:15:16.981 }, 00:15:16.981 { 00:15:16.981 "name": "BaseBdev4", 00:15:16.981 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:16.981 "is_configured": true, 
00:15:16.981 "data_offset": 2048, 00:15:16.981 "data_size": 63488 00:15:16.981 } 00:15:16.981 ] 00:15:16.981 }' 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:16.981 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:16.981 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.982 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.982 [2024-09-30 12:32:28.793042] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:16.982 [2024-09-30 12:32:28.836397] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:17.242 [2024-09-30 12:32:28.942683] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:17.242 [2024-09-30 
12:32:28.942716] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:17.242 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.242 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:17.242 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:17.242 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.242 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.242 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.242 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.242 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.242 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.242 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.242 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.242 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.242 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.242 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.242 "name": "raid_bdev1", 00:15:17.242 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f", 00:15:17.242 "strip_size_kb": 0, 00:15:17.242 "state": "online", 00:15:17.242 "raid_level": "raid1", 00:15:17.242 "superblock": true, 00:15:17.242 "num_base_bdevs": 4, 00:15:17.242 "num_base_bdevs_discovered": 3, 
00:15:17.242 "num_base_bdevs_operational": 3, 00:15:17.242 "process": { 00:15:17.242 "type": "rebuild", 00:15:17.242 "target": "spare", 00:15:17.242 "progress": { 00:15:17.242 "blocks": 14336, 00:15:17.242 "percent": 22 00:15:17.242 } 00:15:17.242 }, 00:15:17.242 "base_bdevs_list": [ 00:15:17.242 { 00:15:17.242 "name": "spare", 00:15:17.242 "uuid": "5f76b7b0-9051-5a6e-96e7-e65b38b509fd", 00:15:17.242 "is_configured": true, 00:15:17.242 "data_offset": 2048, 00:15:17.242 "data_size": 63488 00:15:17.242 }, 00:15:17.242 { 00:15:17.242 "name": null, 00:15:17.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.242 "is_configured": false, 00:15:17.242 "data_offset": 0, 00:15:17.242 "data_size": 63488 00:15:17.242 }, 00:15:17.242 { 00:15:17.242 "name": "BaseBdev3", 00:15:17.242 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:17.242 "is_configured": true, 00:15:17.242 "data_offset": 2048, 00:15:17.242 "data_size": 63488 00:15:17.242 }, 00:15:17.242 { 00:15:17.242 "name": "BaseBdev4", 00:15:17.242 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:17.242 "is_configured": true, 00:15:17.242 "data_offset": 2048, 00:15:17.242 "data_size": 63488 00:15:17.242 } 00:15:17.242 ] 00:15:17.242 }' 00:15:17.242 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.242 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.242 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.242 [2024-09-30 12:32:29.070720] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:17.242 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.242 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=494 00:15:17.242 12:32:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.242 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.242 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.242 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.242 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.242 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.242 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.242 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.242 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.242 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.242 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.502 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.502 "name": "raid_bdev1", 00:15:17.502 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f", 00:15:17.502 "strip_size_kb": 0, 00:15:17.502 "state": "online", 00:15:17.502 "raid_level": "raid1", 00:15:17.502 "superblock": true, 00:15:17.502 "num_base_bdevs": 4, 00:15:17.502 "num_base_bdevs_discovered": 3, 00:15:17.502 "num_base_bdevs_operational": 3, 00:15:17.502 "process": { 00:15:17.502 "type": "rebuild", 00:15:17.502 "target": "spare", 00:15:17.502 "progress": { 00:15:17.502 "blocks": 16384, 00:15:17.502 "percent": 25 00:15:17.502 } 00:15:17.502 }, 00:15:17.502 "base_bdevs_list": [ 00:15:17.502 { 00:15:17.502 "name": "spare", 00:15:17.502 "uuid": 
"5f76b7b0-9051-5a6e-96e7-e65b38b509fd", 00:15:17.502 "is_configured": true, 00:15:17.502 "data_offset": 2048, 00:15:17.502 "data_size": 63488 00:15:17.502 }, 00:15:17.502 { 00:15:17.502 "name": null, 00:15:17.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.502 "is_configured": false, 00:15:17.502 "data_offset": 0, 00:15:17.502 "data_size": 63488 00:15:17.502 }, 00:15:17.502 { 00:15:17.502 "name": "BaseBdev3", 00:15:17.502 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:17.502 "is_configured": true, 00:15:17.502 "data_offset": 2048, 00:15:17.502 "data_size": 63488 00:15:17.502 }, 00:15:17.502 { 00:15:17.502 "name": "BaseBdev4", 00:15:17.502 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:17.502 "is_configured": true, 00:15:17.502 "data_offset": 2048, 00:15:17.502 "data_size": 63488 00:15:17.502 } 00:15:17.502 ] 00:15:17.502 }' 00:15:17.502 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.502 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.502 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.502 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.502 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:18.071 141.25 IOPS, 423.75 MiB/s [2024-09-30 12:32:29.745286] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:18.332 [2024-09-30 12:32:30.089529] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:18.332 [2024-09-30 12:32:30.090348] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:18.592 12:32:30 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.592 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.592 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.592 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.592 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.592 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.592 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.592 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.592 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.592 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.592 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.592 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.592 "name": "raid_bdev1", 00:15:18.592 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f", 00:15:18.592 "strip_size_kb": 0, 00:15:18.592 "state": "online", 00:15:18.592 "raid_level": "raid1", 00:15:18.592 "superblock": true, 00:15:18.592 "num_base_bdevs": 4, 00:15:18.592 "num_base_bdevs_discovered": 3, 00:15:18.592 "num_base_bdevs_operational": 3, 00:15:18.592 "process": { 00:15:18.592 "type": "rebuild", 00:15:18.592 "target": "spare", 00:15:18.592 "progress": { 00:15:18.592 "blocks": 32768, 00:15:18.592 "percent": 51 00:15:18.592 } 00:15:18.592 }, 00:15:18.592 "base_bdevs_list": [ 00:15:18.592 { 00:15:18.592 "name": "spare", 00:15:18.592 "uuid": 
"5f76b7b0-9051-5a6e-96e7-e65b38b509fd", 00:15:18.592 "is_configured": true, 00:15:18.592 "data_offset": 2048, 00:15:18.592 "data_size": 63488 00:15:18.592 }, 00:15:18.592 { 00:15:18.592 "name": null, 00:15:18.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.592 "is_configured": false, 00:15:18.592 "data_offset": 0, 00:15:18.592 "data_size": 63488 00:15:18.592 }, 00:15:18.592 { 00:15:18.592 "name": "BaseBdev3", 00:15:18.592 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:18.592 "is_configured": true, 00:15:18.592 "data_offset": 2048, 00:15:18.592 "data_size": 63488 00:15:18.592 }, 00:15:18.592 { 00:15:18.592 "name": "BaseBdev4", 00:15:18.592 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:18.592 "is_configured": true, 00:15:18.592 "data_offset": 2048, 00:15:18.592 "data_size": 63488 00:15:18.592 } 00:15:18.592 ] 00:15:18.592 }' 00:15:18.592 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.592 [2024-09-30 12:32:30.317323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:18.592 [2024-09-30 12:32:30.317713] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:18.592 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.592 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.592 128.00 IOPS, 384.00 MiB/s 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.592 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:19.531 [2024-09-30 12:32:31.315889] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:19.531 113.50 IOPS, 340.50 MiB/s 12:32:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:19.531 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.531 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.531 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.531 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.531 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.531 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.531 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.531 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.531 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.790 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.790 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.790 "name": "raid_bdev1", 00:15:19.790 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f", 00:15:19.790 "strip_size_kb": 0, 00:15:19.790 "state": "online", 00:15:19.790 "raid_level": "raid1", 00:15:19.790 "superblock": true, 00:15:19.790 "num_base_bdevs": 4, 00:15:19.790 "num_base_bdevs_discovered": 3, 00:15:19.790 "num_base_bdevs_operational": 3, 00:15:19.790 "process": { 00:15:19.790 "type": "rebuild", 00:15:19.790 "target": "spare", 00:15:19.790 "progress": { 00:15:19.790 "blocks": 51200, 00:15:19.790 "percent": 80 00:15:19.790 } 00:15:19.790 }, 00:15:19.790 "base_bdevs_list": [ 00:15:19.790 { 00:15:19.790 "name": "spare", 00:15:19.790 "uuid": 
"5f76b7b0-9051-5a6e-96e7-e65b38b509fd", 00:15:19.790 "is_configured": true, 00:15:19.790 "data_offset": 2048, 00:15:19.790 "data_size": 63488 00:15:19.790 }, 00:15:19.790 { 00:15:19.790 "name": null, 00:15:19.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.790 "is_configured": false, 00:15:19.790 "data_offset": 0, 00:15:19.790 "data_size": 63488 00:15:19.790 }, 00:15:19.790 { 00:15:19.790 "name": "BaseBdev3", 00:15:19.790 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:19.790 "is_configured": true, 00:15:19.790 "data_offset": 2048, 00:15:19.790 "data_size": 63488 00:15:19.790 }, 00:15:19.790 { 00:15:19.790 "name": "BaseBdev4", 00:15:19.790 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:19.790 "is_configured": true, 00:15:19.790 "data_offset": 2048, 00:15:19.790 "data_size": 63488 00:15:19.790 } 00:15:19.790 ] 00:15:19.790 }' 00:15:19.790 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.790 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.790 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.790 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.790 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:20.359 [2024-09-30 12:32:31.980242] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:20.360 [2024-09-30 12:32:32.085041] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:20.360 [2024-09-30 12:32:32.086858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.878 102.29 IOPS, 306.86 MiB/s 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.878 "name": "raid_bdev1", 00:15:20.878 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f", 00:15:20.878 "strip_size_kb": 0, 00:15:20.878 "state": "online", 00:15:20.878 "raid_level": "raid1", 00:15:20.878 "superblock": true, 00:15:20.878 "num_base_bdevs": 4, 00:15:20.878 "num_base_bdevs_discovered": 3, 00:15:20.878 "num_base_bdevs_operational": 3, 00:15:20.878 "base_bdevs_list": [ 00:15:20.878 { 00:15:20.878 "name": "spare", 00:15:20.878 "uuid": "5f76b7b0-9051-5a6e-96e7-e65b38b509fd", 00:15:20.878 "is_configured": true, 00:15:20.878 "data_offset": 2048, 00:15:20.878 "data_size": 63488 00:15:20.878 }, 00:15:20.878 { 00:15:20.878 "name": null, 00:15:20.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.878 "is_configured": false, 00:15:20.878 "data_offset": 0, 00:15:20.878 
"data_size": 63488 00:15:20.878 }, 00:15:20.878 { 00:15:20.878 "name": "BaseBdev3", 00:15:20.878 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:20.878 "is_configured": true, 00:15:20.878 "data_offset": 2048, 00:15:20.878 "data_size": 63488 00:15:20.878 }, 00:15:20.878 { 00:15:20.878 "name": "BaseBdev4", 00:15:20.878 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:20.878 "is_configured": true, 00:15:20.878 "data_offset": 2048, 00:15:20.878 "data_size": 63488 00:15:20.878 } 00:15:20.878 ] 00:15:20.878 }' 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.878 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.878 "name": "raid_bdev1", 00:15:20.878 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f", 00:15:20.878 "strip_size_kb": 0, 00:15:20.878 "state": "online", 00:15:20.878 "raid_level": "raid1", 00:15:20.878 "superblock": true, 00:15:20.878 "num_base_bdevs": 4, 00:15:20.878 "num_base_bdevs_discovered": 3, 00:15:20.878 "num_base_bdevs_operational": 3, 00:15:20.878 "base_bdevs_list": [ 00:15:20.878 { 00:15:20.878 "name": "spare", 00:15:20.878 "uuid": "5f76b7b0-9051-5a6e-96e7-e65b38b509fd", 00:15:20.878 "is_configured": true, 00:15:20.878 "data_offset": 2048, 00:15:20.879 "data_size": 63488 00:15:20.879 }, 00:15:20.879 { 00:15:20.879 "name": null, 00:15:20.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.879 "is_configured": false, 00:15:20.879 "data_offset": 0, 00:15:20.879 "data_size": 63488 00:15:20.879 }, 00:15:20.879 { 00:15:20.879 "name": "BaseBdev3", 00:15:20.879 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:20.879 "is_configured": true, 00:15:20.879 "data_offset": 2048, 00:15:20.879 "data_size": 63488 00:15:20.879 }, 00:15:20.879 { 00:15:20.879 "name": "BaseBdev4", 00:15:20.879 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:20.879 "is_configured": true, 00:15:20.879 "data_offset": 2048, 00:15:20.879 "data_size": 63488 00:15:20.879 } 00:15:20.879 ] 00:15:20.879 }' 00:15:20.879 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.138 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:21.138 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:15:21.138 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:21.138 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:21.138 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.138 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.139 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.139 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.139 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.139 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.139 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.139 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.139 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.139 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.139 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.139 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.139 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.139 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.139 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.139 "name": "raid_bdev1", 00:15:21.139 "uuid": 
"49400525-8795-4a4a-9635-e8eb806f656f", 00:15:21.139 "strip_size_kb": 0, 00:15:21.139 "state": "online", 00:15:21.139 "raid_level": "raid1", 00:15:21.139 "superblock": true, 00:15:21.139 "num_base_bdevs": 4, 00:15:21.139 "num_base_bdevs_discovered": 3, 00:15:21.139 "num_base_bdevs_operational": 3, 00:15:21.139 "base_bdevs_list": [ 00:15:21.139 { 00:15:21.139 "name": "spare", 00:15:21.139 "uuid": "5f76b7b0-9051-5a6e-96e7-e65b38b509fd", 00:15:21.139 "is_configured": true, 00:15:21.139 "data_offset": 2048, 00:15:21.139 "data_size": 63488 00:15:21.139 }, 00:15:21.139 { 00:15:21.139 "name": null, 00:15:21.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.139 "is_configured": false, 00:15:21.139 "data_offset": 0, 00:15:21.139 "data_size": 63488 00:15:21.139 }, 00:15:21.139 { 00:15:21.139 "name": "BaseBdev3", 00:15:21.139 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:21.139 "is_configured": true, 00:15:21.139 "data_offset": 2048, 00:15:21.139 "data_size": 63488 00:15:21.139 }, 00:15:21.139 { 00:15:21.139 "name": "BaseBdev4", 00:15:21.139 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:21.139 "is_configured": true, 00:15:21.139 "data_offset": 2048, 00:15:21.139 "data_size": 63488 00:15:21.139 } 00:15:21.139 ] 00:15:21.139 }' 00:15:21.139 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.139 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.707 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:21.707 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.707 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.707 [2024-09-30 12:32:33.309951] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:21.707 [2024-09-30 12:32:33.309987] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: 
raid bdev state changing from online to offline 00:15:21.707 93.88 IOPS, 281.62 MiB/s 00:15:21.707 Latency(us) 00:15:21.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.707 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:21.707 raid_bdev1 : 8.02 93.81 281.43 0.00 0.00 14806.31 289.76 110810.21 00:15:21.707 =================================================================================================================== 00:15:21.707 Total : 93.81 281.43 0.00 0.00 14806.31 289.76 110810.21 00:15:21.707 [2024-09-30 12:32:33.404788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.707 [2024-09-30 12:32:33.404828] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.707 [2024-09-30 12:32:33.404917] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.707 [2024-09-30 12:32:33.404927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:21.707 { 00:15:21.707 "results": [ 00:15:21.707 { 00:15:21.707 "job": "raid_bdev1", 00:15:21.707 "core_mask": "0x1", 00:15:21.707 "workload": "randrw", 00:15:21.707 "percentage": 50, 00:15:21.707 "status": "finished", 00:15:21.707 "queue_depth": 2, 00:15:21.707 "io_size": 3145728, 00:15:21.707 "runtime": 8.016309, 00:15:21.707 "iops": 93.80875911844217, 00:15:21.707 "mibps": 281.42627735532653, 00:15:21.707 "io_failed": 0, 00:15:21.707 "io_timeout": 0, 00:15:21.707 "avg_latency_us": 14806.305751184616, 00:15:21.707 "min_latency_us": 289.7606986899563, 00:15:21.707 "max_latency_us": 110810.21484716157 00:15:21.707 } 00:15:21.707 ], 00:15:21.707 "core_count": 1 00:15:21.707 } 00:15:21.707 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.707 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # 
rpc_cmd bdev_raid_get_bdevs all
00:15:21.707 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length
00:15:21.707 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:21.707 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:21.707 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:21.707 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:15:21.707 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:15:21.707 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']'
00:15:21.707 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0
00:15:21.707 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:15:21.708 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:15:21.708 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:21.708 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:15:21.708 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:21.708 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i
00:15:21.708 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:21.708 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:21.708 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0
00:15:21.967 /dev/nbd0
00:15:21.967 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:15:21.967 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:15:21.967 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:15:21.967 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i
00:15:21.967 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:15:21.967 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:15:21.967 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:15:21.967 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:21.968 1+0 records in
00:15:21.968 1+0 records out
00:15:21.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426846 s, 9.6 MB/s
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']'
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']'
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3')
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:21.968 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1
00:15:22.227 /dev/nbd1
00:15:22.227 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:15:22.227 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:15:22.227 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:15:22.228 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i
00:15:22.228 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:15:22.228 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:15:22.228 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:15:22.228 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break
00:15:22.228 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:15:22.228 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:15:22.228 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:22.228 1+0 records in
00:15:22.228 1+0 records out
00:15:22.228 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479419 s, 8.5 MB/s
00:15:22.228 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:22.228 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096
00:15:22.228 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:22.228 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:15:22.228 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0
00:15:22.228 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:22.228 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:22.228 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:15:22.228 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:15:22.228 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:15:22.228 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:15:22.228 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:22.228 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:15:22.228 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:22.228 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']'
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4')
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:22.487 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1
00:15:22.748 /dev/nbd1
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:22.748 1+0 records in
00:15:22.748 1+0 records out
00:15:22.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382807 s, 10.7 MB/s
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:22.748 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:15:23.007 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:15:23.007 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:15:23.007 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:15:23.007 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:23.007 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:23.007 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:15:23.007 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:15:23.007 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:15:23.007 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:15:23.007 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:15:23.007 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:15:23.008 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:23.008 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:15:23.008 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:23.008 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:15:23.267 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:15:23.267 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:15:23.267 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:15:23.267 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:23.267 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:23.267 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:15:23.267 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:15:23.267 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:15:23.267 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:15:23.267 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:15:23.267 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:23.267 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:23.267 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:23.267 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:15:23.267 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:23.267 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:23.267 [2024-09-30 12:32:35.061869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:15:23.267 [2024-09-30 12:32:35.061919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:23.267 [2024-09-30 12:32:35.061940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:15:23.268 [2024-09-30 12:32:35.061949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:23.268 [2024-09-30 12:32:35.063995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:23.268 [2024-09-30 12:32:35.064030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:15:23.268 [2024-09-30 12:32:35.064114] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:15:23.268 [2024-09-30 12:32:35.064162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:23.268 [2024-09-30 12:32:35.064302] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:23.268 [2024-09-30 12:32:35.064411] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:15:23.268 spare
00:15:23.268 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:23.268 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:15:23.268 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:23.268 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:23.527 [2024-09-30 12:32:35.164317] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:15:23.527 [2024-09-30 12:32:35.164342] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:15:23.527 [2024-09-30 12:32:35.164584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160
00:15:23.527 [2024-09-30 12:32:35.164732] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:15:23.527 [2024-09-30 12:32:35.164763] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:15:23.527 [2024-09-30 12:32:35.164906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:23.527 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:23.527 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:15:23.527 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:23.527 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:23.527 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:23.527 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:23.527 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:23.527 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:23.527 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:23.527 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:23.527 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:23.527 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:23.527 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:23.527 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:23.527 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:23.527 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:23.527 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:23.527 "name": "raid_bdev1",
00:15:23.527 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f",
00:15:23.527 "strip_size_kb": 0,
00:15:23.527 "state": "online",
00:15:23.527 "raid_level": "raid1",
00:15:23.527 "superblock": true,
00:15:23.527 "num_base_bdevs": 4,
00:15:23.527 "num_base_bdevs_discovered": 3,
00:15:23.527 "num_base_bdevs_operational": 3,
00:15:23.527 "base_bdevs_list": [
00:15:23.527 {
00:15:23.527 "name": "spare",
00:15:23.527 "uuid": "5f76b7b0-9051-5a6e-96e7-e65b38b509fd",
00:15:23.527 "is_configured": true,
00:15:23.527 "data_offset": 2048,
00:15:23.527 "data_size": 63488
00:15:23.527 },
00:15:23.527 {
00:15:23.527 "name": null,
00:15:23.527 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:23.527 "is_configured": false,
00:15:23.527 "data_offset": 2048,
00:15:23.527 "data_size": 63488
00:15:23.527 },
00:15:23.527 {
00:15:23.527 "name": "BaseBdev3",
00:15:23.527 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708",
00:15:23.527 "is_configured": true,
00:15:23.527 "data_offset": 2048,
00:15:23.527 "data_size": 63488
00:15:23.527 },
00:15:23.527 {
00:15:23.527 "name": "BaseBdev4",
00:15:23.527 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4",
00:15:23.527 "is_configured": true,
00:15:23.527 "data_offset": 2048,
00:15:23.527 "data_size": 63488
00:15:23.527 }
00:15:23.527 ]
00:15:23.527 }'
00:15:23.527 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:23.527 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:23.786 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:23.786 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:23.786 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:23.786 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:23.786 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:23.786 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:23.786 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:23.786 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:23.786 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:23.786 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:23.786 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:23.786 "name": "raid_bdev1",
00:15:23.786 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f",
00:15:23.786 "strip_size_kb": 0,
00:15:23.786 "state": "online",
00:15:23.786 "raid_level": "raid1",
00:15:23.786 "superblock": true,
00:15:23.786 "num_base_bdevs": 4,
00:15:23.786 "num_base_bdevs_discovered": 3,
00:15:23.786 "num_base_bdevs_operational": 3,
00:15:23.786 "base_bdevs_list": [
00:15:23.786 {
00:15:23.786 "name": "spare",
00:15:23.786 "uuid": "5f76b7b0-9051-5a6e-96e7-e65b38b509fd",
00:15:23.786 "is_configured": true,
00:15:23.786 "data_offset": 2048,
00:15:23.786 "data_size": 63488
00:15:23.786 },
00:15:23.786 {
00:15:23.786 "name": null,
00:15:23.786 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:23.786 "is_configured": false,
00:15:23.786 "data_offset": 2048,
00:15:23.786 "data_size": 63488
00:15:23.786 },
00:15:23.786 {
00:15:23.786 "name": "BaseBdev3",
00:15:23.786 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708",
00:15:23.786 "is_configured": true,
00:15:23.786 "data_offset": 2048,
00:15:23.786 "data_size": 63488
00:15:23.786 },
00:15:23.786 {
00:15:23.786 "name": "BaseBdev4",
00:15:23.786 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4",
00:15:23.786 "is_configured": true,
00:15:23.786 "data_offset": 2048,
00:15:23.786 "data_size": 63488
00:15:23.786 }
00:15:23.786 ]
00:15:23.786 }'
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:24.045 [2024-09-30 12:32:35.776761] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:24.045 "name": "raid_bdev1",
00:15:24.045 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f",
00:15:24.045 "strip_size_kb": 0,
00:15:24.045 "state": "online",
00:15:24.045 "raid_level": "raid1",
00:15:24.045 "superblock": true,
00:15:24.045 "num_base_bdevs": 4,
00:15:24.045 "num_base_bdevs_discovered": 2,
00:15:24.045 "num_base_bdevs_operational": 2,
00:15:24.045 "base_bdevs_list": [
00:15:24.045 {
00:15:24.045 "name": null,
00:15:24.045 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:24.045 "is_configured": false,
00:15:24.045 "data_offset": 0,
00:15:24.045 "data_size": 63488
00:15:24.045 },
00:15:24.045 {
00:15:24.045 "name": null,
00:15:24.045 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:24.045 "is_configured": false,
00:15:24.045 "data_offset": 2048,
00:15:24.045 "data_size": 63488
00:15:24.045 },
00:15:24.045 {
00:15:24.045 "name": "BaseBdev3",
00:15:24.045 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708",
00:15:24.045 "is_configured": true,
00:15:24.045 "data_offset": 2048,
00:15:24.045 "data_size": 63488
00:15:24.045 },
00:15:24.045 {
00:15:24.045 "name": "BaseBdev4",
00:15:24.045 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4",
00:15:24.045 "is_configured": true,
00:15:24.045 "data_offset": 2048,
00:15:24.045 "data_size": 63488
00:15:24.045 }
00:15:24.045 ]
00:15:24.045 }'
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:24.045 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:24.304 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:24.304 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:24.304 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:24.563 [2024-09-30 12:32:36.204059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:24.563 [2024-09-30 12:32:36.204191] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:15:24.563 [2024-09-30 12:32:36.204210] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:15:24.563 [2024-09-30 12:32:36.204239] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:24.563 [2024-09-30 12:32:36.217506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230
00:15:24.563 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:24.563 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1
00:15:24.563 [2024-09-30 12:32:36.219194] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:25.500 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:25.500 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:25.500 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:25.500 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:25.500 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:25.500 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:25.500 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:25.500 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:25.500 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:25.500 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:25.500 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:25.500 "name": "raid_bdev1",
00:15:25.500 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f",
00:15:25.500 "strip_size_kb": 0,
00:15:25.500 "state": "online",
00:15:25.500 "raid_level": "raid1",
00:15:25.500 "superblock": true,
00:15:25.500 "num_base_bdevs": 4,
00:15:25.500 "num_base_bdevs_discovered": 3,
00:15:25.500 "num_base_bdevs_operational": 3,
00:15:25.500 "process": {
00:15:25.500 "type": "rebuild",
00:15:25.500 "target": "spare",
00:15:25.500 "progress": {
00:15:25.500 "blocks": 20480,
00:15:25.500 "percent": 32
00:15:25.500 }
00:15:25.500 },
00:15:25.500 "base_bdevs_list": [
00:15:25.500 {
00:15:25.500 "name": "spare",
00:15:25.500 "uuid": "5f76b7b0-9051-5a6e-96e7-e65b38b509fd",
00:15:25.500 "is_configured": true,
00:15:25.500 "data_offset": 2048,
00:15:25.500 "data_size": 63488
00:15:25.500 },
00:15:25.500 {
00:15:25.500 "name": null,
00:15:25.500 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:25.500 "is_configured": false,
00:15:25.500 "data_offset": 2048,
00:15:25.500 "data_size": 63488
00:15:25.500 },
00:15:25.500 {
00:15:25.500 "name": "BaseBdev3",
00:15:25.500 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708",
00:15:25.500 "is_configured": true,
00:15:25.500 "data_offset": 2048,
00:15:25.500 "data_size": 63488
00:15:25.500 },
00:15:25.500 {
00:15:25.500 "name": "BaseBdev4",
00:15:25.500 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4",
00:15:25.500 "is_configured": true,
00:15:25.500 "data_offset": 2048,
00:15:25.500 "data_size": 63488
00:15:25.500 }
00:15:25.500 ]
00:15:25.500 }'
00:15:25.500 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:25.500 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:25.500 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:25.500 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:25.500 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:15:25.500 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:25.500 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:25.500 [2024-09-30 12:32:37.367929] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:25.759 [2024-09-30 12:32:37.423777] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:15:25.759 [2024-09-30 12:32:37.423825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:25.759 [2024-09-30 12:32:37.423842] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:25.759 [2024-09-30 12:32:37.423849] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:15:25.759 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:25.759 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:15:25.759 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:25.759 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:25.759 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:25.759 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:25.759 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:25.759 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:25.759 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:25.759 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:25.759 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:25.759 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:25.759 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:25.759 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:25.759 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:25.759 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:25.759 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:25.759 "name": "raid_bdev1",
00:15:25.759 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f",
00:15:25.759 "strip_size_kb": 0,
00:15:25.759 "state": "online",
00:15:25.759 "raid_level": "raid1",
00:15:25.759 "superblock": true,
00:15:25.759 "num_base_bdevs": 4,
00:15:25.759 "num_base_bdevs_discovered": 2,
00:15:25.759 "num_base_bdevs_operational": 2,
00:15:25.759 "base_bdevs_list": [
00:15:25.759 {
00:15:25.759 "name": null,
00:15:25.759 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:25.759 "is_configured": false,
00:15:25.759 "data_offset": 0,
00:15:25.759 "data_size": 63488
00:15:25.759 },
00:15:25.760 {
00:15:25.760 "name": null,
00:15:25.760 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:25.760 "is_configured": false,
00:15:25.760 "data_offset": 2048,
00:15:25.760 "data_size": 63488
00:15:25.760 },
00:15:25.760 {
00:15:25.760 "name": "BaseBdev3",
00:15:25.760 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708",
00:15:25.760 "is_configured": true,
00:15:25.760 "data_offset": 2048,
00:15:25.760 "data_size": 63488 00:15:25.760 }, 00:15:25.760 { 00:15:25.760 "name": "BaseBdev4", 00:15:25.760 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:25.760 "is_configured": true, 00:15:25.760 "data_offset": 2048, 00:15:25.760 "data_size": 63488 00:15:25.760 } 00:15:25.760 ] 00:15:25.760 }' 00:15:25.760 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.760 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.019 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:26.019 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.019 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.019 [2024-09-30 12:32:37.901759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:26.019 [2024-09-30 12:32:37.901807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.019 [2024-09-30 12:32:37.901830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:26.019 [2024-09-30 12:32:37.901838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.019 [2024-09-30 12:32:37.902288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.019 [2024-09-30 12:32:37.902306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:26.019 [2024-09-30 12:32:37.902379] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:26.019 [2024-09-30 12:32:37.902390] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:26.019 [2024-09-30 12:32:37.902400] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding 
bdev spare to raid bdev raid_bdev1. 00:15:26.019 [2024-09-30 12:32:37.902420] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:26.279 [2024-09-30 12:32:37.915342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:15:26.279 spare 00:15:26.279 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.279 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:26.279 [2024-09-30 12:32:37.917114] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:27.218 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.218 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.218 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.218 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.218 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.218 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.218 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.218 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.218 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.218 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.218 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.218 "name": "raid_bdev1", 00:15:27.218 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f", 00:15:27.218 
"strip_size_kb": 0, 00:15:27.218 "state": "online", 00:15:27.218 "raid_level": "raid1", 00:15:27.218 "superblock": true, 00:15:27.218 "num_base_bdevs": 4, 00:15:27.218 "num_base_bdevs_discovered": 3, 00:15:27.218 "num_base_bdevs_operational": 3, 00:15:27.218 "process": { 00:15:27.218 "type": "rebuild", 00:15:27.218 "target": "spare", 00:15:27.218 "progress": { 00:15:27.218 "blocks": 20480, 00:15:27.218 "percent": 32 00:15:27.218 } 00:15:27.218 }, 00:15:27.218 "base_bdevs_list": [ 00:15:27.218 { 00:15:27.218 "name": "spare", 00:15:27.218 "uuid": "5f76b7b0-9051-5a6e-96e7-e65b38b509fd", 00:15:27.218 "is_configured": true, 00:15:27.218 "data_offset": 2048, 00:15:27.218 "data_size": 63488 00:15:27.218 }, 00:15:27.218 { 00:15:27.218 "name": null, 00:15:27.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.218 "is_configured": false, 00:15:27.218 "data_offset": 2048, 00:15:27.218 "data_size": 63488 00:15:27.218 }, 00:15:27.218 { 00:15:27.218 "name": "BaseBdev3", 00:15:27.218 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:27.218 "is_configured": true, 00:15:27.218 "data_offset": 2048, 00:15:27.218 "data_size": 63488 00:15:27.218 }, 00:15:27.218 { 00:15:27.218 "name": "BaseBdev4", 00:15:27.218 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:27.218 "is_configured": true, 00:15:27.218 "data_offset": 2048, 00:15:27.218 "data_size": 63488 00:15:27.218 } 00:15:27.218 ] 00:15:27.218 }' 00:15:27.218 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.218 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.218 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.218 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.218 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete 
spare 00:15:27.218 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.218 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.218 [2024-09-30 12:32:39.077199] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.478 [2024-09-30 12:32:39.121534] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:27.478 [2024-09-30 12:32:39.121587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.478 [2024-09-30 12:32:39.121601] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.478 [2024-09-30 12:32:39.121612] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:27.478 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.478 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:27.478 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.478 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.478 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.478 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.478 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.478 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.478 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.478 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.478 
12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.478 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.478 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.478 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.478 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.478 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.478 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.478 "name": "raid_bdev1", 00:15:27.478 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f", 00:15:27.478 "strip_size_kb": 0, 00:15:27.478 "state": "online", 00:15:27.478 "raid_level": "raid1", 00:15:27.478 "superblock": true, 00:15:27.478 "num_base_bdevs": 4, 00:15:27.478 "num_base_bdevs_discovered": 2, 00:15:27.478 "num_base_bdevs_operational": 2, 00:15:27.478 "base_bdevs_list": [ 00:15:27.478 { 00:15:27.478 "name": null, 00:15:27.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.478 "is_configured": false, 00:15:27.478 "data_offset": 0, 00:15:27.478 "data_size": 63488 00:15:27.478 }, 00:15:27.478 { 00:15:27.478 "name": null, 00:15:27.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.478 "is_configured": false, 00:15:27.478 "data_offset": 2048, 00:15:27.478 "data_size": 63488 00:15:27.478 }, 00:15:27.478 { 00:15:27.478 "name": "BaseBdev3", 00:15:27.478 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:27.478 "is_configured": true, 00:15:27.478 "data_offset": 2048, 00:15:27.478 "data_size": 63488 00:15:27.478 }, 00:15:27.478 { 00:15:27.478 "name": "BaseBdev4", 00:15:27.478 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:27.478 "is_configured": true, 00:15:27.478 "data_offset": 2048, 
00:15:27.478 "data_size": 63488 00:15:27.478 } 00:15:27.478 ] 00:15:27.478 }' 00:15:27.478 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.478 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.738 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:27.738 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.738 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:27.738 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:27.738 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.738 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.738 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.738 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.738 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.738 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.738 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.738 "name": "raid_bdev1", 00:15:27.738 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f", 00:15:27.738 "strip_size_kb": 0, 00:15:27.738 "state": "online", 00:15:27.738 "raid_level": "raid1", 00:15:27.738 "superblock": true, 00:15:27.738 "num_base_bdevs": 4, 00:15:27.738 "num_base_bdevs_discovered": 2, 00:15:27.738 "num_base_bdevs_operational": 2, 00:15:27.738 "base_bdevs_list": [ 00:15:27.738 { 00:15:27.738 "name": null, 00:15:27.738 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:27.738 "is_configured": false, 00:15:27.738 "data_offset": 0, 00:15:27.738 "data_size": 63488 00:15:27.738 }, 00:15:27.738 { 00:15:27.738 "name": null, 00:15:27.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.738 "is_configured": false, 00:15:27.738 "data_offset": 2048, 00:15:27.738 "data_size": 63488 00:15:27.738 }, 00:15:27.738 { 00:15:27.739 "name": "BaseBdev3", 00:15:27.739 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:27.739 "is_configured": true, 00:15:27.739 "data_offset": 2048, 00:15:27.739 "data_size": 63488 00:15:27.739 }, 00:15:27.739 { 00:15:27.739 "name": "BaseBdev4", 00:15:27.739 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:27.739 "is_configured": true, 00:15:27.739 "data_offset": 2048, 00:15:27.739 "data_size": 63488 00:15:27.739 } 00:15:27.739 ] 00:15:27.739 }' 00:15:27.739 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.739 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:27.739 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.999 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:27.999 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:27.999 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.999 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.999 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.999 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:27.999 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:27.999 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.999 [2024-09-30 12:32:39.683322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:27.999 [2024-09-30 12:32:39.683374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.999 [2024-09-30 12:32:39.683408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:27.999 [2024-09-30 12:32:39.683419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.999 [2024-09-30 12:32:39.683818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.999 [2024-09-30 12:32:39.683839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:27.999 [2024-09-30 12:32:39.683904] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:27.999 [2024-09-30 12:32:39.683919] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:27.999 [2024-09-30 12:32:39.683927] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:27.999 [2024-09-30 12:32:39.683940] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:27.999 BaseBdev1 00:15:27.999 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.999 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:28.939 12:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:28.939 12:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.939 12:32:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.939 12:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.939 12:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.939 12:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.939 12:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.939 12:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.939 12:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.939 12:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.939 12:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.939 12:32:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.939 12:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.939 12:32:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.939 12:32:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.939 12:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.939 "name": "raid_bdev1", 00:15:28.939 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f", 00:15:28.939 "strip_size_kb": 0, 00:15:28.939 "state": "online", 00:15:28.939 "raid_level": "raid1", 00:15:28.939 "superblock": true, 00:15:28.939 "num_base_bdevs": 4, 00:15:28.939 "num_base_bdevs_discovered": 2, 00:15:28.939 "num_base_bdevs_operational": 2, 00:15:28.939 "base_bdevs_list": [ 00:15:28.939 { 00:15:28.939 "name": null, 00:15:28.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.939 
"is_configured": false, 00:15:28.939 "data_offset": 0, 00:15:28.939 "data_size": 63488 00:15:28.939 }, 00:15:28.939 { 00:15:28.939 "name": null, 00:15:28.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.939 "is_configured": false, 00:15:28.939 "data_offset": 2048, 00:15:28.939 "data_size": 63488 00:15:28.939 }, 00:15:28.939 { 00:15:28.939 "name": "BaseBdev3", 00:15:28.939 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:28.939 "is_configured": true, 00:15:28.939 "data_offset": 2048, 00:15:28.939 "data_size": 63488 00:15:28.939 }, 00:15:28.939 { 00:15:28.939 "name": "BaseBdev4", 00:15:28.939 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:28.939 "is_configured": true, 00:15:28.939 "data_offset": 2048, 00:15:28.939 "data_size": 63488 00:15:28.939 } 00:15:28.939 ] 00:15:28.939 }' 00:15:28.939 12:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.939 12:32:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.509 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.509 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.509 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.509 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.509 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.509 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.509 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.509 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.509 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:29.509 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.509 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.509 "name": "raid_bdev1", 00:15:29.509 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f", 00:15:29.509 "strip_size_kb": 0, 00:15:29.509 "state": "online", 00:15:29.509 "raid_level": "raid1", 00:15:29.509 "superblock": true, 00:15:29.509 "num_base_bdevs": 4, 00:15:29.509 "num_base_bdevs_discovered": 2, 00:15:29.509 "num_base_bdevs_operational": 2, 00:15:29.509 "base_bdevs_list": [ 00:15:29.509 { 00:15:29.509 "name": null, 00:15:29.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.509 "is_configured": false, 00:15:29.509 "data_offset": 0, 00:15:29.509 "data_size": 63488 00:15:29.509 }, 00:15:29.509 { 00:15:29.510 "name": null, 00:15:29.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.510 "is_configured": false, 00:15:29.510 "data_offset": 2048, 00:15:29.510 "data_size": 63488 00:15:29.510 }, 00:15:29.510 { 00:15:29.510 "name": "BaseBdev3", 00:15:29.510 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:29.510 "is_configured": true, 00:15:29.510 "data_offset": 2048, 00:15:29.510 "data_size": 63488 00:15:29.510 }, 00:15:29.510 { 00:15:29.510 "name": "BaseBdev4", 00:15:29.510 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:29.510 "is_configured": true, 00:15:29.510 "data_offset": 2048, 00:15:29.510 "data_size": 63488 00:15:29.510 } 00:15:29.510 ] 00:15:29.510 }' 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- 
# [[ none == \n\o\n\e ]] 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.510 [2024-09-30 12:32:41.296739] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.510 [2024-09-30 12:32:41.296869] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:29.510 [2024-09-30 12:32:41.296881] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:29.510 request: 00:15:29.510 { 00:15:29.510 "base_bdev": "BaseBdev1", 00:15:29.510 "raid_bdev": "raid_bdev1", 00:15:29.510 "method": "bdev_raid_add_base_bdev", 00:15:29.510 "req_id": 1 00:15:29.510 } 00:15:29.510 Got JSON-RPC error response 00:15:29.510 response: 00:15:29.510 { 
00:15:29.510 "code": -22, 00:15:29.510 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:29.510 } 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:29.510 12:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:30.450 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:30.450 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.450 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.450 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.450 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.450 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:30.450 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.450 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.450 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.450 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.450 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:30.451 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.451 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.451 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.451 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.711 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.711 "name": "raid_bdev1", 00:15:30.711 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f", 00:15:30.711 "strip_size_kb": 0, 00:15:30.711 "state": "online", 00:15:30.711 "raid_level": "raid1", 00:15:30.711 "superblock": true, 00:15:30.711 "num_base_bdevs": 4, 00:15:30.711 "num_base_bdevs_discovered": 2, 00:15:30.711 "num_base_bdevs_operational": 2, 00:15:30.711 "base_bdevs_list": [ 00:15:30.711 { 00:15:30.711 "name": null, 00:15:30.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.711 "is_configured": false, 00:15:30.711 "data_offset": 0, 00:15:30.711 "data_size": 63488 00:15:30.711 }, 00:15:30.711 { 00:15:30.711 "name": null, 00:15:30.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.711 "is_configured": false, 00:15:30.711 "data_offset": 2048, 00:15:30.711 "data_size": 63488 00:15:30.711 }, 00:15:30.711 { 00:15:30.711 "name": "BaseBdev3", 00:15:30.711 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:30.711 "is_configured": true, 00:15:30.711 "data_offset": 2048, 00:15:30.711 "data_size": 63488 00:15:30.711 }, 00:15:30.711 { 00:15:30.711 "name": "BaseBdev4", 00:15:30.711 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:30.711 "is_configured": true, 00:15:30.711 "data_offset": 2048, 00:15:30.711 "data_size": 63488 00:15:30.711 } 00:15:30.711 ] 00:15:30.711 }' 00:15:30.711 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.711 12:32:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.971 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:30.971 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.971 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:30.971 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:30.971 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.971 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.971 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.971 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.971 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.971 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.971 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.971 "name": "raid_bdev1", 00:15:30.971 "uuid": "49400525-8795-4a4a-9635-e8eb806f656f", 00:15:30.971 "strip_size_kb": 0, 00:15:30.971 "state": "online", 00:15:30.971 "raid_level": "raid1", 00:15:30.971 "superblock": true, 00:15:30.971 "num_base_bdevs": 4, 00:15:30.971 "num_base_bdevs_discovered": 2, 00:15:30.971 "num_base_bdevs_operational": 2, 00:15:30.971 "base_bdevs_list": [ 00:15:30.971 { 00:15:30.971 "name": null, 00:15:30.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.971 "is_configured": false, 00:15:30.971 "data_offset": 0, 00:15:30.971 "data_size": 63488 00:15:30.971 }, 00:15:30.971 { 00:15:30.971 "name": null, 00:15:30.971 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:30.971 "is_configured": false, 00:15:30.971 "data_offset": 2048, 00:15:30.971 "data_size": 63488 00:15:30.971 }, 00:15:30.971 { 00:15:30.971 "name": "BaseBdev3", 00:15:30.971 "uuid": "7b2dbf9c-eceb-5137-9ffc-d0432458d708", 00:15:30.971 "is_configured": true, 00:15:30.971 "data_offset": 2048, 00:15:30.971 "data_size": 63488 00:15:30.971 }, 00:15:30.971 { 00:15:30.971 "name": "BaseBdev4", 00:15:30.971 "uuid": "9bca8309-f921-573d-ad39-0aa49b7d84a4", 00:15:30.971 "is_configured": true, 00:15:30.971 "data_offset": 2048, 00:15:30.971 "data_size": 63488 00:15:30.971 } 00:15:30.971 ] 00:15:30.971 }' 00:15:30.971 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.971 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:30.971 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.231 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:31.231 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79020 00:15:31.231 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 79020 ']' 00:15:31.231 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 79020 00:15:31.231 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:15:31.231 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:31.231 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79020 00:15:31.231 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:31.231 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:15:31.231 killing process with pid 79020 00:15:31.231 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79020' 00:15:31.231 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 79020 00:15:31.231 Received shutdown signal, test time was about 17.574266 seconds 00:15:31.231 00:15:31.231 Latency(us) 00:15:31.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.231 =================================================================================================================== 00:15:31.231 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:31.231 [2024-09-30 12:32:42.925285] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.231 [2024-09-30 12:32:42.925376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.231 [2024-09-30 12:32:42.925448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.231 [2024-09-30 12:32:42.925458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:31.231 12:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 79020 00:15:31.491 [2024-09-30 12:32:43.322106] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.873 12:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:32.873 00:15:32.873 real 0m21.114s 00:15:32.873 user 0m27.589s 00:15:32.873 sys 0m2.555s 00:15:32.873 12:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:32.873 12:32:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.873 ************************************ 00:15:32.873 END TEST raid_rebuild_test_sb_io 00:15:32.873 ************************************ 00:15:32.873 12:32:44 bdev_raid -- 
bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:32.873 12:32:44 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:32.873 12:32:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:32.873 12:32:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:32.873 12:32:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:32.873 ************************************ 00:15:32.873 START TEST raid5f_state_function_test 00:15:32.873 ************************************ 00:15:32.873 12:32:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:15:32.873 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:32.873 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:32.873 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:32.873 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:32.873 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:32.873 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.873 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:32.873 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.873 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79742 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:32.874 Process raid pid: 79742 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 
'Process raid pid: 79742' 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79742 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 79742 ']' 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.874 12:32:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.874 [2024-09-30 12:32:44.757330] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:15:32.874 [2024-09-30 12:32:44.757480] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.134 [2024-09-30 12:32:44.929524] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.393 [2024-09-30 12:32:45.128130] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.653 [2024-09-30 12:32:45.327785] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.653 [2024-09-30 12:32:45.327869] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.914 [2024-09-30 12:32:45.565433] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:33.914 [2024-09-30 12:32:45.565487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:33.914 [2024-09-30 12:32:45.565497] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.914 [2024-09-30 12:32:45.565506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.914 [2024-09-30 12:32:45.565512] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:33.914 [2024-09-30 12:32:45.565522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.914 "name": "Existed_Raid", 00:15:33.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.914 "strip_size_kb": 64, 00:15:33.914 "state": "configuring", 00:15:33.914 "raid_level": "raid5f", 00:15:33.914 "superblock": false, 00:15:33.914 "num_base_bdevs": 3, 00:15:33.914 "num_base_bdevs_discovered": 0, 00:15:33.914 "num_base_bdevs_operational": 3, 00:15:33.914 "base_bdevs_list": [ 00:15:33.914 { 00:15:33.914 "name": "BaseBdev1", 00:15:33.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.914 "is_configured": false, 00:15:33.914 "data_offset": 0, 00:15:33.914 "data_size": 0 00:15:33.914 }, 00:15:33.914 { 00:15:33.914 "name": "BaseBdev2", 00:15:33.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.914 "is_configured": false, 00:15:33.914 "data_offset": 0, 00:15:33.914 "data_size": 0 00:15:33.914 }, 00:15:33.914 { 00:15:33.914 "name": "BaseBdev3", 00:15:33.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.914 "is_configured": false, 00:15:33.914 "data_offset": 0, 00:15:33.914 "data_size": 0 00:15:33.914 } 00:15:33.914 ] 00:15:33.914 }' 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.914 12:32:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.174 12:32:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:34.174 12:32:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.174 12:32:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.174 [2024-09-30 12:32:45.988605] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.174 [2024-09-30 12:32:45.988679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:34.174 12:32:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.174 12:32:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:34.174 12:32:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.174 12:32:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.174 [2024-09-30 12:32:46.000610] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.174 [2024-09-30 12:32:46.000686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:34.174 [2024-09-30 12:32:46.000711] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.174 [2024-09-30 12:32:46.000733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.175 [2024-09-30 12:32:46.000765] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:34.175 [2024-09-30 12:32:46.000786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.175 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.175 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:34.175 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.175 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.434 [2024-09-30 12:32:46.078998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.434 BaseBdev1 00:15:34.434 12:32:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.434 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:34.434 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:34.434 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:34.434 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:34.434 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:34.434 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:34.434 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:34.434 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.434 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.434 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.434 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:34.434 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.434 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.434 [ 00:15:34.434 { 00:15:34.434 "name": "BaseBdev1", 00:15:34.434 "aliases": [ 00:15:34.434 "f1d8c6f8-bd95-43c0-8e22-8090be6701aa" 00:15:34.434 ], 00:15:34.434 "product_name": "Malloc disk", 00:15:34.434 "block_size": 512, 00:15:34.434 "num_blocks": 65536, 00:15:34.434 "uuid": "f1d8c6f8-bd95-43c0-8e22-8090be6701aa", 00:15:34.434 "assigned_rate_limits": { 00:15:34.434 "rw_ios_per_sec": 0, 00:15:34.434 
"rw_mbytes_per_sec": 0, 00:15:34.434 "r_mbytes_per_sec": 0, 00:15:34.434 "w_mbytes_per_sec": 0 00:15:34.434 }, 00:15:34.434 "claimed": true, 00:15:34.434 "claim_type": "exclusive_write", 00:15:34.434 "zoned": false, 00:15:34.434 "supported_io_types": { 00:15:34.434 "read": true, 00:15:34.434 "write": true, 00:15:34.434 "unmap": true, 00:15:34.434 "flush": true, 00:15:34.434 "reset": true, 00:15:34.434 "nvme_admin": false, 00:15:34.434 "nvme_io": false, 00:15:34.434 "nvme_io_md": false, 00:15:34.434 "write_zeroes": true, 00:15:34.434 "zcopy": true, 00:15:34.434 "get_zone_info": false, 00:15:34.434 "zone_management": false, 00:15:34.434 "zone_append": false, 00:15:34.434 "compare": false, 00:15:34.434 "compare_and_write": false, 00:15:34.434 "abort": true, 00:15:34.434 "seek_hole": false, 00:15:34.434 "seek_data": false, 00:15:34.434 "copy": true, 00:15:34.434 "nvme_iov_md": false 00:15:34.434 }, 00:15:34.434 "memory_domains": [ 00:15:34.434 { 00:15:34.434 "dma_device_id": "system", 00:15:34.434 "dma_device_type": 1 00:15:34.434 }, 00:15:34.434 { 00:15:34.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.434 "dma_device_type": 2 00:15:34.434 } 00:15:34.434 ], 00:15:34.434 "driver_specific": {} 00:15:34.434 } 00:15:34.434 ] 00:15:34.435 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.435 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:34.435 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:34.435 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.435 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.435 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.435 12:32:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.435 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.435 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.435 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.435 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.435 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.435 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.435 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.435 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.435 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.435 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.435 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.435 "name": "Existed_Raid", 00:15:34.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.435 "strip_size_kb": 64, 00:15:34.435 "state": "configuring", 00:15:34.435 "raid_level": "raid5f", 00:15:34.435 "superblock": false, 00:15:34.435 "num_base_bdevs": 3, 00:15:34.435 "num_base_bdevs_discovered": 1, 00:15:34.435 "num_base_bdevs_operational": 3, 00:15:34.435 "base_bdevs_list": [ 00:15:34.435 { 00:15:34.435 "name": "BaseBdev1", 00:15:34.435 "uuid": "f1d8c6f8-bd95-43c0-8e22-8090be6701aa", 00:15:34.435 "is_configured": true, 00:15:34.435 "data_offset": 0, 00:15:34.435 "data_size": 65536 00:15:34.435 }, 00:15:34.435 { 00:15:34.435 "name": 
"BaseBdev2", 00:15:34.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.435 "is_configured": false, 00:15:34.435 "data_offset": 0, 00:15:34.435 "data_size": 0 00:15:34.435 }, 00:15:34.435 { 00:15:34.435 "name": "BaseBdev3", 00:15:34.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.435 "is_configured": false, 00:15:34.435 "data_offset": 0, 00:15:34.435 "data_size": 0 00:15:34.435 } 00:15:34.435 ] 00:15:34.435 }' 00:15:34.435 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.435 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.693 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:34.693 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.693 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.693 [2024-09-30 12:32:46.554184] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.693 [2024-09-30 12:32:46.554220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:34.693 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.693 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:34.693 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.693 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.693 [2024-09-30 12:32:46.562208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.693 [2024-09-30 12:32:46.563890] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:34.693 [2024-09-30 12:32:46.563970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.693 [2024-09-30 12:32:46.563982] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:34.693 [2024-09-30 12:32:46.563991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.693 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.693 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:34.693 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:34.693 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:34.693 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.693 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.693 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.693 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.693 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.694 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.694 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.694 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.694 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.694 12:32:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.694 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.694 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.694 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.694 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.952 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.952 "name": "Existed_Raid", 00:15:34.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.952 "strip_size_kb": 64, 00:15:34.952 "state": "configuring", 00:15:34.952 "raid_level": "raid5f", 00:15:34.952 "superblock": false, 00:15:34.952 "num_base_bdevs": 3, 00:15:34.952 "num_base_bdevs_discovered": 1, 00:15:34.952 "num_base_bdevs_operational": 3, 00:15:34.952 "base_bdevs_list": [ 00:15:34.952 { 00:15:34.952 "name": "BaseBdev1", 00:15:34.952 "uuid": "f1d8c6f8-bd95-43c0-8e22-8090be6701aa", 00:15:34.952 "is_configured": true, 00:15:34.952 "data_offset": 0, 00:15:34.952 "data_size": 65536 00:15:34.952 }, 00:15:34.952 { 00:15:34.952 "name": "BaseBdev2", 00:15:34.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.952 "is_configured": false, 00:15:34.952 "data_offset": 0, 00:15:34.952 "data_size": 0 00:15:34.952 }, 00:15:34.952 { 00:15:34.952 "name": "BaseBdev3", 00:15:34.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.953 "is_configured": false, 00:15:34.953 "data_offset": 0, 00:15:34.953 "data_size": 0 00:15:34.953 } 00:15:34.953 ] 00:15:34.953 }' 00:15:34.953 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.953 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.213 12:32:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:35.213 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.213 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.213 [2024-09-30 12:32:46.989600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:35.213 BaseBdev2 00:15:35.213 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.213 12:32:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:35.213 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:35.213 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:35.213 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:35.213 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:35.213 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:35.213 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:35.213 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.213 12:32:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:35.213 [ 00:15:35.213 { 00:15:35.213 "name": "BaseBdev2", 00:15:35.213 "aliases": [ 00:15:35.213 "85222e39-a0ad-4fcd-b323-64b8bce79a8f" 00:15:35.213 ], 00:15:35.213 "product_name": "Malloc disk", 00:15:35.213 "block_size": 512, 00:15:35.213 "num_blocks": 65536, 00:15:35.213 "uuid": "85222e39-a0ad-4fcd-b323-64b8bce79a8f", 00:15:35.213 "assigned_rate_limits": { 00:15:35.213 "rw_ios_per_sec": 0, 00:15:35.213 "rw_mbytes_per_sec": 0, 00:15:35.213 "r_mbytes_per_sec": 0, 00:15:35.213 "w_mbytes_per_sec": 0 00:15:35.213 }, 00:15:35.213 "claimed": true, 00:15:35.213 "claim_type": "exclusive_write", 00:15:35.213 "zoned": false, 00:15:35.213 "supported_io_types": { 00:15:35.213 "read": true, 00:15:35.213 "write": true, 00:15:35.213 "unmap": true, 00:15:35.213 "flush": true, 00:15:35.213 "reset": true, 00:15:35.213 "nvme_admin": false, 00:15:35.213 "nvme_io": false, 00:15:35.213 "nvme_io_md": false, 00:15:35.213 "write_zeroes": true, 00:15:35.213 "zcopy": true, 00:15:35.213 "get_zone_info": false, 00:15:35.213 "zone_management": false, 00:15:35.213 "zone_append": false, 00:15:35.213 "compare": false, 00:15:35.213 "compare_and_write": false, 00:15:35.213 "abort": true, 00:15:35.213 "seek_hole": false, 00:15:35.213 "seek_data": false, 00:15:35.213 "copy": true, 00:15:35.213 "nvme_iov_md": false 00:15:35.213 }, 00:15:35.213 "memory_domains": [ 00:15:35.213 { 00:15:35.213 "dma_device_id": "system", 00:15:35.213 "dma_device_type": 1 00:15:35.213 }, 00:15:35.213 { 00:15:35.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.213 "dma_device_type": 2 00:15:35.213 } 00:15:35.213 ], 00:15:35.213 "driver_specific": {} 00:15:35.213 } 00:15:35.213 ] 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:35.213 "name": "Existed_Raid", 00:15:35.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.213 "strip_size_kb": 64, 00:15:35.213 "state": "configuring", 00:15:35.213 "raid_level": "raid5f", 00:15:35.213 "superblock": false, 00:15:35.213 "num_base_bdevs": 3, 00:15:35.213 "num_base_bdevs_discovered": 2, 00:15:35.213 "num_base_bdevs_operational": 3, 00:15:35.213 "base_bdevs_list": [ 00:15:35.213 { 00:15:35.213 "name": "BaseBdev1", 00:15:35.213 "uuid": "f1d8c6f8-bd95-43c0-8e22-8090be6701aa", 00:15:35.213 "is_configured": true, 00:15:35.213 "data_offset": 0, 00:15:35.213 "data_size": 65536 00:15:35.213 }, 00:15:35.213 { 00:15:35.213 "name": "BaseBdev2", 00:15:35.213 "uuid": "85222e39-a0ad-4fcd-b323-64b8bce79a8f", 00:15:35.213 "is_configured": true, 00:15:35.213 "data_offset": 0, 00:15:35.213 "data_size": 65536 00:15:35.213 }, 00:15:35.213 { 00:15:35.213 "name": "BaseBdev3", 00:15:35.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.213 "is_configured": false, 00:15:35.213 "data_offset": 0, 00:15:35.213 "data_size": 0 00:15:35.213 } 00:15:35.213 ] 00:15:35.213 }' 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.213 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.783 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:35.783 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.783 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.783 [2024-09-30 12:32:47.501320] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:35.783 [2024-09-30 12:32:47.501372] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:35.783 [2024-09-30 12:32:47.501390] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:35.783 [2024-09-30 12:32:47.501621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:35.783 [2024-09-30 12:32:47.506708] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:35.783 [2024-09-30 12:32:47.506730] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:35.783 [2024-09-30 12:32:47.507001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.783 BaseBdev3 00:15:35.783 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.783 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:35.783 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:35.783 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:35.783 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:35.783 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:35.783 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:35.783 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:35.783 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.783 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.784 [ 00:15:35.784 { 00:15:35.784 "name": "BaseBdev3", 00:15:35.784 "aliases": [ 00:15:35.784 "499931b5-b4e7-4efc-9e84-7184472f371a" 00:15:35.784 ], 00:15:35.784 "product_name": "Malloc disk", 00:15:35.784 "block_size": 512, 00:15:35.784 "num_blocks": 65536, 00:15:35.784 "uuid": "499931b5-b4e7-4efc-9e84-7184472f371a", 00:15:35.784 "assigned_rate_limits": { 00:15:35.784 "rw_ios_per_sec": 0, 00:15:35.784 "rw_mbytes_per_sec": 0, 00:15:35.784 "r_mbytes_per_sec": 0, 00:15:35.784 "w_mbytes_per_sec": 0 00:15:35.784 }, 00:15:35.784 "claimed": true, 00:15:35.784 "claim_type": "exclusive_write", 00:15:35.784 "zoned": false, 00:15:35.784 "supported_io_types": { 00:15:35.784 "read": true, 00:15:35.784 "write": true, 00:15:35.784 "unmap": true, 00:15:35.784 "flush": true, 00:15:35.784 "reset": true, 00:15:35.784 "nvme_admin": false, 00:15:35.784 "nvme_io": false, 00:15:35.784 "nvme_io_md": false, 00:15:35.784 "write_zeroes": true, 00:15:35.784 "zcopy": true, 00:15:35.784 "get_zone_info": false, 00:15:35.784 "zone_management": false, 00:15:35.784 "zone_append": false, 00:15:35.784 "compare": false, 00:15:35.784 "compare_and_write": false, 00:15:35.784 "abort": true, 00:15:35.784 "seek_hole": false, 00:15:35.784 "seek_data": false, 00:15:35.784 "copy": true, 00:15:35.784 "nvme_iov_md": false 00:15:35.784 }, 00:15:35.784 "memory_domains": [ 00:15:35.784 { 00:15:35.784 "dma_device_id": "system", 00:15:35.784 "dma_device_type": 1 00:15:35.784 }, 00:15:35.784 { 00:15:35.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.784 "dma_device_type": 2 00:15:35.784 } 00:15:35.784 ], 00:15:35.784 "driver_specific": {} 00:15:35.784 } 00:15:35.784 ] 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.784 12:32:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.784 "name": "Existed_Raid", 00:15:35.784 "uuid": "d2fa42da-9b05-4b82-8194-1c92062bf09e", 00:15:35.784 "strip_size_kb": 64, 00:15:35.784 "state": "online", 00:15:35.784 "raid_level": "raid5f", 00:15:35.784 "superblock": false, 00:15:35.784 "num_base_bdevs": 3, 00:15:35.784 "num_base_bdevs_discovered": 3, 00:15:35.784 "num_base_bdevs_operational": 3, 00:15:35.784 "base_bdevs_list": [ 00:15:35.784 { 00:15:35.784 "name": "BaseBdev1", 00:15:35.784 "uuid": "f1d8c6f8-bd95-43c0-8e22-8090be6701aa", 00:15:35.784 "is_configured": true, 00:15:35.784 "data_offset": 0, 00:15:35.784 "data_size": 65536 00:15:35.784 }, 00:15:35.784 { 00:15:35.784 "name": "BaseBdev2", 00:15:35.784 "uuid": "85222e39-a0ad-4fcd-b323-64b8bce79a8f", 00:15:35.784 "is_configured": true, 00:15:35.784 "data_offset": 0, 00:15:35.784 "data_size": 65536 00:15:35.784 }, 00:15:35.784 { 00:15:35.784 "name": "BaseBdev3", 00:15:35.784 "uuid": "499931b5-b4e7-4efc-9e84-7184472f371a", 00:15:35.784 "is_configured": true, 00:15:35.784 "data_offset": 0, 00:15:35.784 "data_size": 65536 00:15:35.784 } 00:15:35.784 ] 00:15:35.784 }' 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.784 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.355 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:36.355 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:36.355 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:36.355 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:36.355 12:32:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:36.355 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:36.355 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:36.355 12:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:36.355 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.355 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.355 [2024-09-30 12:32:47.968194] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.355 12:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.355 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:36.355 "name": "Existed_Raid", 00:15:36.355 "aliases": [ 00:15:36.355 "d2fa42da-9b05-4b82-8194-1c92062bf09e" 00:15:36.355 ], 00:15:36.355 "product_name": "Raid Volume", 00:15:36.355 "block_size": 512, 00:15:36.355 "num_blocks": 131072, 00:15:36.355 "uuid": "d2fa42da-9b05-4b82-8194-1c92062bf09e", 00:15:36.355 "assigned_rate_limits": { 00:15:36.355 "rw_ios_per_sec": 0, 00:15:36.355 "rw_mbytes_per_sec": 0, 00:15:36.355 "r_mbytes_per_sec": 0, 00:15:36.355 "w_mbytes_per_sec": 0 00:15:36.355 }, 00:15:36.355 "claimed": false, 00:15:36.355 "zoned": false, 00:15:36.355 "supported_io_types": { 00:15:36.355 "read": true, 00:15:36.355 "write": true, 00:15:36.355 "unmap": false, 00:15:36.355 "flush": false, 00:15:36.355 "reset": true, 00:15:36.355 "nvme_admin": false, 00:15:36.355 "nvme_io": false, 00:15:36.355 "nvme_io_md": false, 00:15:36.355 "write_zeroes": true, 00:15:36.355 "zcopy": false, 00:15:36.355 "get_zone_info": false, 00:15:36.355 "zone_management": false, 00:15:36.355 "zone_append": false, 
00:15:36.355 "compare": false, 00:15:36.355 "compare_and_write": false, 00:15:36.355 "abort": false, 00:15:36.355 "seek_hole": false, 00:15:36.355 "seek_data": false, 00:15:36.355 "copy": false, 00:15:36.355 "nvme_iov_md": false 00:15:36.355 }, 00:15:36.355 "driver_specific": { 00:15:36.355 "raid": { 00:15:36.355 "uuid": "d2fa42da-9b05-4b82-8194-1c92062bf09e", 00:15:36.355 "strip_size_kb": 64, 00:15:36.355 "state": "online", 00:15:36.355 "raid_level": "raid5f", 00:15:36.355 "superblock": false, 00:15:36.355 "num_base_bdevs": 3, 00:15:36.355 "num_base_bdevs_discovered": 3, 00:15:36.355 "num_base_bdevs_operational": 3, 00:15:36.355 "base_bdevs_list": [ 00:15:36.355 { 00:15:36.355 "name": "BaseBdev1", 00:15:36.355 "uuid": "f1d8c6f8-bd95-43c0-8e22-8090be6701aa", 00:15:36.355 "is_configured": true, 00:15:36.355 "data_offset": 0, 00:15:36.355 "data_size": 65536 00:15:36.355 }, 00:15:36.355 { 00:15:36.355 "name": "BaseBdev2", 00:15:36.355 "uuid": "85222e39-a0ad-4fcd-b323-64b8bce79a8f", 00:15:36.355 "is_configured": true, 00:15:36.355 "data_offset": 0, 00:15:36.355 "data_size": 65536 00:15:36.355 }, 00:15:36.355 { 00:15:36.355 "name": "BaseBdev3", 00:15:36.355 "uuid": "499931b5-b4e7-4efc-9e84-7184472f371a", 00:15:36.355 "is_configured": true, 00:15:36.355 "data_offset": 0, 00:15:36.355 "data_size": 65536 00:15:36.355 } 00:15:36.355 ] 00:15:36.355 } 00:15:36.355 } 00:15:36.355 }' 00:15:36.355 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:36.355 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:36.355 BaseBdev2 00:15:36.355 BaseBdev3' 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.356 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.356 [2024-09-30 12:32:48.235571] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:36.616 
12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.616 "name": "Existed_Raid", 00:15:36.616 "uuid": "d2fa42da-9b05-4b82-8194-1c92062bf09e", 00:15:36.616 "strip_size_kb": 64, 00:15:36.616 "state": 
"online", 00:15:36.616 "raid_level": "raid5f", 00:15:36.616 "superblock": false, 00:15:36.616 "num_base_bdevs": 3, 00:15:36.616 "num_base_bdevs_discovered": 2, 00:15:36.616 "num_base_bdevs_operational": 2, 00:15:36.616 "base_bdevs_list": [ 00:15:36.616 { 00:15:36.616 "name": null, 00:15:36.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.616 "is_configured": false, 00:15:36.616 "data_offset": 0, 00:15:36.616 "data_size": 65536 00:15:36.616 }, 00:15:36.616 { 00:15:36.616 "name": "BaseBdev2", 00:15:36.616 "uuid": "85222e39-a0ad-4fcd-b323-64b8bce79a8f", 00:15:36.616 "is_configured": true, 00:15:36.616 "data_offset": 0, 00:15:36.616 "data_size": 65536 00:15:36.616 }, 00:15:36.616 { 00:15:36.616 "name": "BaseBdev3", 00:15:36.616 "uuid": "499931b5-b4e7-4efc-9e84-7184472f371a", 00:15:36.616 "is_configured": true, 00:15:36.616 "data_offset": 0, 00:15:36.616 "data_size": 65536 00:15:36.616 } 00:15:36.616 ] 00:15:36.616 }' 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.616 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.186 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:37.186 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.186 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.186 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.186 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.186 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.186 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.186 12:32:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.186 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.186 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:37.186 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.186 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.186 [2024-09-30 12:32:48.854960] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:37.186 [2024-09-30 12:32:48.855105] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.186 [2024-09-30 12:32:48.941229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.186 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.187 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.187 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.187 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.187 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.187 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.187 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.187 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.187 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.187 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:37.187 12:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:37.187 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.187 12:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.187 [2024-09-30 12:32:49.001142] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:37.187 [2024-09-30 12:32:49.001230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.446 BaseBdev2 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:37.446 [ 00:15:37.446 { 00:15:37.446 "name": "BaseBdev2", 00:15:37.446 "aliases": [ 00:15:37.446 "3146dfd4-d7ab-4f81-8825-6196cc6e4f7f" 00:15:37.446 ], 00:15:37.446 "product_name": "Malloc disk", 00:15:37.446 "block_size": 512, 00:15:37.446 "num_blocks": 65536, 00:15:37.446 "uuid": "3146dfd4-d7ab-4f81-8825-6196cc6e4f7f", 00:15:37.446 "assigned_rate_limits": { 00:15:37.446 "rw_ios_per_sec": 0, 00:15:37.446 "rw_mbytes_per_sec": 0, 00:15:37.446 "r_mbytes_per_sec": 0, 00:15:37.446 "w_mbytes_per_sec": 0 00:15:37.446 }, 00:15:37.446 "claimed": false, 00:15:37.446 "zoned": false, 00:15:37.446 "supported_io_types": { 00:15:37.446 "read": true, 00:15:37.446 "write": true, 00:15:37.446 "unmap": true, 00:15:37.446 "flush": true, 00:15:37.446 "reset": true, 00:15:37.446 "nvme_admin": false, 00:15:37.446 "nvme_io": false, 00:15:37.446 "nvme_io_md": false, 00:15:37.446 "write_zeroes": true, 00:15:37.446 "zcopy": true, 00:15:37.446 "get_zone_info": false, 00:15:37.446 "zone_management": false, 00:15:37.446 "zone_append": false, 00:15:37.446 "compare": false, 00:15:37.446 "compare_and_write": false, 00:15:37.446 "abort": true, 00:15:37.446 "seek_hole": false, 00:15:37.446 "seek_data": false, 00:15:37.446 "copy": true, 00:15:37.446 "nvme_iov_md": false 00:15:37.446 }, 00:15:37.446 "memory_domains": [ 00:15:37.446 { 00:15:37.446 "dma_device_id": "system", 00:15:37.446 "dma_device_type": 1 00:15:37.446 }, 00:15:37.446 { 00:15:37.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.446 "dma_device_type": 2 00:15:37.446 } 00:15:37.446 ], 00:15:37.446 "driver_specific": {} 00:15:37.446 } 00:15:37.446 ] 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.446 BaseBdev3 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.446 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:37.447 [ 00:15:37.447 { 00:15:37.447 "name": "BaseBdev3", 00:15:37.447 "aliases": [ 00:15:37.447 "85193fbc-dd7c-4770-86bc-5940c68b3322" 00:15:37.447 ], 00:15:37.447 "product_name": "Malloc disk", 00:15:37.447 "block_size": 512, 00:15:37.447 "num_blocks": 65536, 00:15:37.447 "uuid": "85193fbc-dd7c-4770-86bc-5940c68b3322", 00:15:37.447 "assigned_rate_limits": { 00:15:37.447 "rw_ios_per_sec": 0, 00:15:37.447 "rw_mbytes_per_sec": 0, 00:15:37.447 "r_mbytes_per_sec": 0, 00:15:37.447 "w_mbytes_per_sec": 0 00:15:37.447 }, 00:15:37.447 "claimed": false, 00:15:37.447 "zoned": false, 00:15:37.447 "supported_io_types": { 00:15:37.447 "read": true, 00:15:37.447 "write": true, 00:15:37.447 "unmap": true, 00:15:37.447 "flush": true, 00:15:37.447 "reset": true, 00:15:37.447 "nvme_admin": false, 00:15:37.447 "nvme_io": false, 00:15:37.447 "nvme_io_md": false, 00:15:37.447 "write_zeroes": true, 00:15:37.447 "zcopy": true, 00:15:37.447 "get_zone_info": false, 00:15:37.447 "zone_management": false, 00:15:37.447 "zone_append": false, 00:15:37.447 "compare": false, 00:15:37.447 "compare_and_write": false, 00:15:37.447 "abort": true, 00:15:37.447 "seek_hole": false, 00:15:37.447 "seek_data": false, 00:15:37.447 "copy": true, 00:15:37.447 "nvme_iov_md": false 00:15:37.447 }, 00:15:37.447 "memory_domains": [ 00:15:37.447 { 00:15:37.447 "dma_device_id": "system", 00:15:37.447 "dma_device_type": 1 00:15:37.447 }, 00:15:37.447 { 00:15:37.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.447 "dma_device_type": 2 00:15:37.447 } 00:15:37.447 ], 00:15:37.447 "driver_specific": {} 00:15:37.447 } 00:15:37.447 ] 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:37.447 12:32:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.447 [2024-09-30 12:32:49.307606] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.447 [2024-09-30 12:32:49.307731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.447 [2024-09-30 12:32:49.307785] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:37.447 [2024-09-30 12:32:49.309452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.447 12:32:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.447 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.707 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.707 "name": "Existed_Raid", 00:15:37.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.707 "strip_size_kb": 64, 00:15:37.707 "state": "configuring", 00:15:37.707 "raid_level": "raid5f", 00:15:37.707 "superblock": false, 00:15:37.707 "num_base_bdevs": 3, 00:15:37.707 "num_base_bdevs_discovered": 2, 00:15:37.707 "num_base_bdevs_operational": 3, 00:15:37.707 "base_bdevs_list": [ 00:15:37.707 { 00:15:37.707 "name": "BaseBdev1", 00:15:37.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.707 "is_configured": false, 00:15:37.707 "data_offset": 0, 00:15:37.707 "data_size": 0 00:15:37.707 }, 00:15:37.707 { 00:15:37.707 "name": "BaseBdev2", 00:15:37.707 "uuid": "3146dfd4-d7ab-4f81-8825-6196cc6e4f7f", 00:15:37.707 "is_configured": true, 00:15:37.707 "data_offset": 0, 00:15:37.707 "data_size": 65536 00:15:37.707 }, 00:15:37.707 { 00:15:37.707 "name": "BaseBdev3", 00:15:37.707 "uuid": "85193fbc-dd7c-4770-86bc-5940c68b3322", 00:15:37.707 "is_configured": true, 
00:15:37.707 "data_offset": 0, 00:15:37.707 "data_size": 65536 00:15:37.707 } 00:15:37.707 ] 00:15:37.707 }' 00:15:37.707 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.707 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.970 [2024-09-30 12:32:49.787123] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.970 12:32:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.970 "name": "Existed_Raid", 00:15:37.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.970 "strip_size_kb": 64, 00:15:37.970 "state": "configuring", 00:15:37.970 "raid_level": "raid5f", 00:15:37.970 "superblock": false, 00:15:37.970 "num_base_bdevs": 3, 00:15:37.970 "num_base_bdevs_discovered": 1, 00:15:37.970 "num_base_bdevs_operational": 3, 00:15:37.970 "base_bdevs_list": [ 00:15:37.970 { 00:15:37.970 "name": "BaseBdev1", 00:15:37.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.970 "is_configured": false, 00:15:37.970 "data_offset": 0, 00:15:37.970 "data_size": 0 00:15:37.970 }, 00:15:37.970 { 00:15:37.970 "name": null, 00:15:37.970 "uuid": "3146dfd4-d7ab-4f81-8825-6196cc6e4f7f", 00:15:37.970 "is_configured": false, 00:15:37.970 "data_offset": 0, 00:15:37.970 "data_size": 65536 00:15:37.970 }, 00:15:37.970 { 00:15:37.970 "name": "BaseBdev3", 00:15:37.970 "uuid": "85193fbc-dd7c-4770-86bc-5940c68b3322", 00:15:37.970 "is_configured": true, 00:15:37.970 "data_offset": 0, 00:15:37.970 "data_size": 65536 00:15:37.970 } 00:15:37.970 ] 00:15:37.970 }' 00:15:37.970 12:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.970 12:32:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.563 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:38.563 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.563 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.563 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.563 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.563 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:38.563 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:38.563 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.563 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.564 [2024-09-30 12:32:50.296535] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.564 BaseBdev1 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:38.564 12:32:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.564 [ 00:15:38.564 { 00:15:38.564 "name": "BaseBdev1", 00:15:38.564 "aliases": [ 00:15:38.564 "20e7d7d2-cff2-4535-b21b-4188a3ab033c" 00:15:38.564 ], 00:15:38.564 "product_name": "Malloc disk", 00:15:38.564 "block_size": 512, 00:15:38.564 "num_blocks": 65536, 00:15:38.564 "uuid": "20e7d7d2-cff2-4535-b21b-4188a3ab033c", 00:15:38.564 "assigned_rate_limits": { 00:15:38.564 "rw_ios_per_sec": 0, 00:15:38.564 "rw_mbytes_per_sec": 0, 00:15:38.564 "r_mbytes_per_sec": 0, 00:15:38.564 "w_mbytes_per_sec": 0 00:15:38.564 }, 00:15:38.564 "claimed": true, 00:15:38.564 "claim_type": "exclusive_write", 00:15:38.564 "zoned": false, 00:15:38.564 "supported_io_types": { 00:15:38.564 "read": true, 00:15:38.564 "write": true, 00:15:38.564 "unmap": true, 00:15:38.564 "flush": true, 00:15:38.564 "reset": true, 00:15:38.564 "nvme_admin": false, 00:15:38.564 "nvme_io": false, 00:15:38.564 "nvme_io_md": false, 00:15:38.564 "write_zeroes": true, 00:15:38.564 "zcopy": true, 00:15:38.564 "get_zone_info": false, 00:15:38.564 "zone_management": false, 00:15:38.564 "zone_append": false, 00:15:38.564 
"compare": false, 00:15:38.564 "compare_and_write": false, 00:15:38.564 "abort": true, 00:15:38.564 "seek_hole": false, 00:15:38.564 "seek_data": false, 00:15:38.564 "copy": true, 00:15:38.564 "nvme_iov_md": false 00:15:38.564 }, 00:15:38.564 "memory_domains": [ 00:15:38.564 { 00:15:38.564 "dma_device_id": "system", 00:15:38.564 "dma_device_type": 1 00:15:38.564 }, 00:15:38.564 { 00:15:38.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.564 "dma_device_type": 2 00:15:38.564 } 00:15:38.564 ], 00:15:38.564 "driver_specific": {} 00:15:38.564 } 00:15:38.564 ] 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.564 12:32:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.564 "name": "Existed_Raid", 00:15:38.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.564 "strip_size_kb": 64, 00:15:38.564 "state": "configuring", 00:15:38.564 "raid_level": "raid5f", 00:15:38.564 "superblock": false, 00:15:38.564 "num_base_bdevs": 3, 00:15:38.564 "num_base_bdevs_discovered": 2, 00:15:38.564 "num_base_bdevs_operational": 3, 00:15:38.564 "base_bdevs_list": [ 00:15:38.564 { 00:15:38.564 "name": "BaseBdev1", 00:15:38.564 "uuid": "20e7d7d2-cff2-4535-b21b-4188a3ab033c", 00:15:38.564 "is_configured": true, 00:15:38.564 "data_offset": 0, 00:15:38.564 "data_size": 65536 00:15:38.564 }, 00:15:38.564 { 00:15:38.564 "name": null, 00:15:38.564 "uuid": "3146dfd4-d7ab-4f81-8825-6196cc6e4f7f", 00:15:38.564 "is_configured": false, 00:15:38.564 "data_offset": 0, 00:15:38.564 "data_size": 65536 00:15:38.564 }, 00:15:38.564 { 00:15:38.564 "name": "BaseBdev3", 00:15:38.564 "uuid": "85193fbc-dd7c-4770-86bc-5940c68b3322", 00:15:38.564 "is_configured": true, 00:15:38.564 "data_offset": 0, 00:15:38.564 "data_size": 65536 00:15:38.564 } 00:15:38.564 ] 00:15:38.564 }' 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.564 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.163 12:32:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.163 [2024-09-30 12:32:50.859620] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.163 12:32:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.163 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.163 "name": "Existed_Raid", 00:15:39.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.163 "strip_size_kb": 64, 00:15:39.163 "state": "configuring", 00:15:39.163 "raid_level": "raid5f", 00:15:39.163 "superblock": false, 00:15:39.163 "num_base_bdevs": 3, 00:15:39.163 "num_base_bdevs_discovered": 1, 00:15:39.163 "num_base_bdevs_operational": 3, 00:15:39.163 "base_bdevs_list": [ 00:15:39.163 { 00:15:39.163 "name": "BaseBdev1", 00:15:39.163 "uuid": "20e7d7d2-cff2-4535-b21b-4188a3ab033c", 00:15:39.163 "is_configured": true, 00:15:39.163 "data_offset": 0, 00:15:39.163 "data_size": 65536 00:15:39.163 }, 00:15:39.163 { 00:15:39.163 "name": null, 00:15:39.163 "uuid": "3146dfd4-d7ab-4f81-8825-6196cc6e4f7f", 00:15:39.164 "is_configured": false, 00:15:39.164 "data_offset": 0, 00:15:39.164 "data_size": 65536 00:15:39.164 }, 00:15:39.164 { 00:15:39.164 "name": null, 
00:15:39.164 "uuid": "85193fbc-dd7c-4770-86bc-5940c68b3322", 00:15:39.164 "is_configured": false, 00:15:39.164 "data_offset": 0, 00:15:39.164 "data_size": 65536 00:15:39.164 } 00:15:39.164 ] 00:15:39.164 }' 00:15:39.164 12:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.164 12:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.424 [2024-09-30 12:32:51.270947] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.424 12:32:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.424 "name": "Existed_Raid", 00:15:39.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.424 "strip_size_kb": 64, 00:15:39.424 "state": "configuring", 00:15:39.424 "raid_level": "raid5f", 00:15:39.424 "superblock": false, 00:15:39.424 "num_base_bdevs": 3, 00:15:39.424 "num_base_bdevs_discovered": 2, 00:15:39.424 "num_base_bdevs_operational": 3, 00:15:39.424 "base_bdevs_list": [ 00:15:39.424 { 
00:15:39.424 "name": "BaseBdev1", 00:15:39.424 "uuid": "20e7d7d2-cff2-4535-b21b-4188a3ab033c", 00:15:39.424 "is_configured": true, 00:15:39.424 "data_offset": 0, 00:15:39.424 "data_size": 65536 00:15:39.424 }, 00:15:39.424 { 00:15:39.424 "name": null, 00:15:39.424 "uuid": "3146dfd4-d7ab-4f81-8825-6196cc6e4f7f", 00:15:39.424 "is_configured": false, 00:15:39.424 "data_offset": 0, 00:15:39.424 "data_size": 65536 00:15:39.424 }, 00:15:39.424 { 00:15:39.424 "name": "BaseBdev3", 00:15:39.424 "uuid": "85193fbc-dd7c-4770-86bc-5940c68b3322", 00:15:39.424 "is_configured": true, 00:15:39.424 "data_offset": 0, 00:15:39.424 "data_size": 65536 00:15:39.424 } 00:15:39.424 ] 00:15:39.424 }' 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.424 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.994 [2024-09-30 12:32:51.782093] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.994 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.254 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.254 12:32:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.254 "name": "Existed_Raid", 00:15:40.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.254 "strip_size_kb": 64, 00:15:40.254 "state": "configuring", 00:15:40.254 "raid_level": "raid5f", 00:15:40.254 "superblock": false, 00:15:40.254 "num_base_bdevs": 3, 00:15:40.254 "num_base_bdevs_discovered": 1, 00:15:40.254 "num_base_bdevs_operational": 3, 00:15:40.254 "base_bdevs_list": [ 00:15:40.254 { 00:15:40.254 "name": null, 00:15:40.254 "uuid": "20e7d7d2-cff2-4535-b21b-4188a3ab033c", 00:15:40.254 "is_configured": false, 00:15:40.254 "data_offset": 0, 00:15:40.254 "data_size": 65536 00:15:40.254 }, 00:15:40.254 { 00:15:40.254 "name": null, 00:15:40.254 "uuid": "3146dfd4-d7ab-4f81-8825-6196cc6e4f7f", 00:15:40.254 "is_configured": false, 00:15:40.254 "data_offset": 0, 00:15:40.254 "data_size": 65536 00:15:40.254 }, 00:15:40.254 { 00:15:40.254 "name": "BaseBdev3", 00:15:40.254 "uuid": "85193fbc-dd7c-4770-86bc-5940c68b3322", 00:15:40.254 "is_configured": true, 00:15:40.254 "data_offset": 0, 00:15:40.254 "data_size": 65536 00:15:40.254 } 00:15:40.254 ] 00:15:40.254 }' 00:15:40.254 12:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.254 12:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.513 [2024-09-30 12:32:52.368581] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.513 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.514 12:32:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.514 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.514 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.514 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.774 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.774 "name": "Existed_Raid", 00:15:40.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.774 "strip_size_kb": 64, 00:15:40.774 "state": "configuring", 00:15:40.774 "raid_level": "raid5f", 00:15:40.774 "superblock": false, 00:15:40.774 "num_base_bdevs": 3, 00:15:40.774 "num_base_bdevs_discovered": 2, 00:15:40.774 "num_base_bdevs_operational": 3, 00:15:40.774 "base_bdevs_list": [ 00:15:40.774 { 00:15:40.774 "name": null, 00:15:40.774 "uuid": "20e7d7d2-cff2-4535-b21b-4188a3ab033c", 00:15:40.774 "is_configured": false, 00:15:40.774 "data_offset": 0, 00:15:40.774 "data_size": 65536 00:15:40.774 }, 00:15:40.774 { 00:15:40.774 "name": "BaseBdev2", 00:15:40.774 "uuid": "3146dfd4-d7ab-4f81-8825-6196cc6e4f7f", 00:15:40.774 "is_configured": true, 00:15:40.774 "data_offset": 0, 00:15:40.774 "data_size": 65536 00:15:40.774 }, 00:15:40.774 { 00:15:40.774 "name": "BaseBdev3", 00:15:40.774 "uuid": "85193fbc-dd7c-4770-86bc-5940c68b3322", 00:15:40.774 "is_configured": true, 00:15:40.774 "data_offset": 0, 00:15:40.774 "data_size": 65536 00:15:40.774 } 00:15:40.774 ] 00:15:40.774 }' 00:15:40.774 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.774 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:41.034 12:32:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 20e7d7d2-cff2-4535-b21b-4188a3ab033c 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.034 [2024-09-30 12:32:52.896478] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:41.034 [2024-09-30 12:32:52.896577] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:41.034 [2024-09-30 12:32:52.896604] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:41.034 [2024-09-30 12:32:52.896873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:15:41.034 [2024-09-30 12:32:52.901524] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:41.034 [2024-09-30 12:32:52.901578] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:41.034 [2024-09-30 12:32:52.901850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.034 NewBaseBdev 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:41.034 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.034 12:32:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.294 [ 00:15:41.294 { 00:15:41.294 "name": "NewBaseBdev", 00:15:41.294 "aliases": [ 00:15:41.294 "20e7d7d2-cff2-4535-b21b-4188a3ab033c" 00:15:41.294 ], 00:15:41.294 "product_name": "Malloc disk", 00:15:41.294 "block_size": 512, 00:15:41.294 "num_blocks": 65536, 00:15:41.294 "uuid": "20e7d7d2-cff2-4535-b21b-4188a3ab033c", 00:15:41.294 "assigned_rate_limits": { 00:15:41.294 "rw_ios_per_sec": 0, 00:15:41.294 "rw_mbytes_per_sec": 0, 00:15:41.294 "r_mbytes_per_sec": 0, 00:15:41.294 "w_mbytes_per_sec": 0 00:15:41.294 }, 00:15:41.294 "claimed": true, 00:15:41.294 "claim_type": "exclusive_write", 00:15:41.294 "zoned": false, 00:15:41.294 "supported_io_types": { 00:15:41.294 "read": true, 00:15:41.294 "write": true, 00:15:41.294 "unmap": true, 00:15:41.294 "flush": true, 00:15:41.294 "reset": true, 00:15:41.294 "nvme_admin": false, 00:15:41.294 "nvme_io": false, 00:15:41.294 "nvme_io_md": false, 00:15:41.294 "write_zeroes": true, 00:15:41.294 "zcopy": true, 00:15:41.294 "get_zone_info": false, 00:15:41.294 "zone_management": false, 00:15:41.294 "zone_append": false, 00:15:41.294 "compare": false, 00:15:41.294 "compare_and_write": false, 00:15:41.294 "abort": true, 00:15:41.294 "seek_hole": false, 00:15:41.294 "seek_data": false, 00:15:41.294 "copy": true, 00:15:41.294 "nvme_iov_md": false 00:15:41.294 }, 00:15:41.294 "memory_domains": [ 00:15:41.294 { 00:15:41.294 "dma_device_id": "system", 00:15:41.294 "dma_device_type": 1 00:15:41.294 }, 00:15:41.294 { 00:15:41.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.294 "dma_device_type": 2 00:15:41.294 } 00:15:41.294 ], 00:15:41.294 "driver_specific": {} 00:15:41.294 } 00:15:41.294 ] 00:15:41.294 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.294 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:41.294 12:32:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:41.294 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.294 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.294 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.294 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.294 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.294 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.294 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.294 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.294 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.294 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.294 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.294 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.294 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.294 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.294 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.294 "name": "Existed_Raid", 00:15:41.294 "uuid": "3e9d7a4e-dfe3-4494-acee-0a26bf668505", 00:15:41.294 "strip_size_kb": 64, 00:15:41.294 "state": "online", 
00:15:41.294 "raid_level": "raid5f", 00:15:41.294 "superblock": false, 00:15:41.294 "num_base_bdevs": 3, 00:15:41.294 "num_base_bdevs_discovered": 3, 00:15:41.294 "num_base_bdevs_operational": 3, 00:15:41.294 "base_bdevs_list": [ 00:15:41.294 { 00:15:41.294 "name": "NewBaseBdev", 00:15:41.294 "uuid": "20e7d7d2-cff2-4535-b21b-4188a3ab033c", 00:15:41.294 "is_configured": true, 00:15:41.294 "data_offset": 0, 00:15:41.294 "data_size": 65536 00:15:41.294 }, 00:15:41.294 { 00:15:41.294 "name": "BaseBdev2", 00:15:41.294 "uuid": "3146dfd4-d7ab-4f81-8825-6196cc6e4f7f", 00:15:41.294 "is_configured": true, 00:15:41.294 "data_offset": 0, 00:15:41.294 "data_size": 65536 00:15:41.294 }, 00:15:41.294 { 00:15:41.294 "name": "BaseBdev3", 00:15:41.294 "uuid": "85193fbc-dd7c-4770-86bc-5940c68b3322", 00:15:41.294 "is_configured": true, 00:15:41.294 "data_offset": 0, 00:15:41.294 "data_size": 65536 00:15:41.294 } 00:15:41.294 ] 00:15:41.294 }' 00:15:41.294 12:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.294 12:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.554 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:41.555 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:41.555 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:41.555 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:41.555 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:41.555 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:41.555 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:41.555 12:32:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.555 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.555 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:41.555 [2024-09-30 12:32:53.399158] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.555 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.555 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:41.555 "name": "Existed_Raid", 00:15:41.555 "aliases": [ 00:15:41.555 "3e9d7a4e-dfe3-4494-acee-0a26bf668505" 00:15:41.555 ], 00:15:41.555 "product_name": "Raid Volume", 00:15:41.555 "block_size": 512, 00:15:41.555 "num_blocks": 131072, 00:15:41.555 "uuid": "3e9d7a4e-dfe3-4494-acee-0a26bf668505", 00:15:41.555 "assigned_rate_limits": { 00:15:41.555 "rw_ios_per_sec": 0, 00:15:41.555 "rw_mbytes_per_sec": 0, 00:15:41.555 "r_mbytes_per_sec": 0, 00:15:41.555 "w_mbytes_per_sec": 0 00:15:41.555 }, 00:15:41.555 "claimed": false, 00:15:41.555 "zoned": false, 00:15:41.555 "supported_io_types": { 00:15:41.555 "read": true, 00:15:41.555 "write": true, 00:15:41.555 "unmap": false, 00:15:41.555 "flush": false, 00:15:41.555 "reset": true, 00:15:41.555 "nvme_admin": false, 00:15:41.555 "nvme_io": false, 00:15:41.555 "nvme_io_md": false, 00:15:41.555 "write_zeroes": true, 00:15:41.555 "zcopy": false, 00:15:41.555 "get_zone_info": false, 00:15:41.555 "zone_management": false, 00:15:41.555 "zone_append": false, 00:15:41.555 "compare": false, 00:15:41.555 "compare_and_write": false, 00:15:41.555 "abort": false, 00:15:41.555 "seek_hole": false, 00:15:41.555 "seek_data": false, 00:15:41.555 "copy": false, 00:15:41.555 "nvme_iov_md": false 00:15:41.555 }, 00:15:41.555 "driver_specific": { 00:15:41.555 "raid": { 00:15:41.555 "uuid": 
"3e9d7a4e-dfe3-4494-acee-0a26bf668505", 00:15:41.555 "strip_size_kb": 64, 00:15:41.555 "state": "online", 00:15:41.555 "raid_level": "raid5f", 00:15:41.555 "superblock": false, 00:15:41.555 "num_base_bdevs": 3, 00:15:41.555 "num_base_bdevs_discovered": 3, 00:15:41.555 "num_base_bdevs_operational": 3, 00:15:41.555 "base_bdevs_list": [ 00:15:41.555 { 00:15:41.555 "name": "NewBaseBdev", 00:15:41.555 "uuid": "20e7d7d2-cff2-4535-b21b-4188a3ab033c", 00:15:41.555 "is_configured": true, 00:15:41.555 "data_offset": 0, 00:15:41.555 "data_size": 65536 00:15:41.555 }, 00:15:41.555 { 00:15:41.555 "name": "BaseBdev2", 00:15:41.555 "uuid": "3146dfd4-d7ab-4f81-8825-6196cc6e4f7f", 00:15:41.555 "is_configured": true, 00:15:41.555 "data_offset": 0, 00:15:41.555 "data_size": 65536 00:15:41.555 }, 00:15:41.555 { 00:15:41.555 "name": "BaseBdev3", 00:15:41.555 "uuid": "85193fbc-dd7c-4770-86bc-5940c68b3322", 00:15:41.555 "is_configured": true, 00:15:41.555 "data_offset": 0, 00:15:41.555 "data_size": 65536 00:15:41.555 } 00:15:41.555 ] 00:15:41.555 } 00:15:41.555 } 00:15:41.555 }' 00:15:41.555 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:41.815 BaseBdev2 00:15:41.815 BaseBdev3' 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.815 [2024-09-30 12:32:53.674499] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:41.815 [2024-09-30 12:32:53.674522] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.815 [2024-09-30 12:32:53.674579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.815 [2024-09-30 12:32:53.674841] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.815 [2024-09-30 12:32:53.674853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79742 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 79742 ']' 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 79742 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:41.815 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79742 00:15:42.075 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:42.075 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:42.075 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79742' 00:15:42.075 killing process with pid 79742 00:15:42.075 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 79742 00:15:42.075 [2024-09-30 12:32:53.714640] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:42.075 12:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 79742 00:15:42.335 [2024-09-30 12:32:53.989827] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:43.717 00:15:43.717 real 0m10.535s 00:15:43.717 user 0m16.670s 00:15:43.717 sys 0m1.891s 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.717 ************************************ 00:15:43.717 END TEST raid5f_state_function_test 00:15:43.717 ************************************ 00:15:43.717 12:32:55 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:43.717 12:32:55 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:43.717 12:32:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:43.717 12:32:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:43.717 ************************************ 00:15:43.717 START TEST raid5f_state_function_test_sb 00:15:43.717 ************************************ 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:43.717 12:32:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:43.717 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:43.718 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:43.718 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:43.718 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:43.718 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:43.718 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:43.718 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:43.718 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:43.718 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:43.718 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:43.718 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80358 00:15:43.718 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80358' 00:15:43.718 12:32:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:43.718 Process raid pid: 80358 00:15:43.718 12:32:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80358 00:15:43.718 12:32:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80358 ']' 00:15:43.718 12:32:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.718 12:32:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:43.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.718 12:32:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.718 12:32:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:43.718 12:32:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.718 [2024-09-30 12:32:55.367130] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:15:43.718 [2024-09-30 12:32:55.367249] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.718 [2024-09-30 12:32:55.530360] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.978 [2024-09-30 12:32:55.714131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.237 [2024-09-30 12:32:55.885540] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.237 [2024-09-30 12:32:55.885576] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.497 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:44.497 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:44.497 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:44.497 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.497 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.497 [2024-09-30 12:32:56.197376] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:44.497 [2024-09-30 12:32:56.197430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:44.497 [2024-09-30 12:32:56.197439] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:44.497 [2024-09-30 12:32:56.197449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:44.497 [2024-09-30 12:32:56.197455] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:44.497 [2024-09-30 12:32:56.197463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:44.497 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.497 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:44.497 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.497 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.497 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.497 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.497 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.497 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.497 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.497 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.497 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.497 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.498 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.498 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.498 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.498 12:32:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.498 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.498 "name": "Existed_Raid", 00:15:44.498 "uuid": "39af5e48-1bbf-4602-b74b-4c0a2f09c44a", 00:15:44.498 "strip_size_kb": 64, 00:15:44.498 "state": "configuring", 00:15:44.498 "raid_level": "raid5f", 00:15:44.498 "superblock": true, 00:15:44.498 "num_base_bdevs": 3, 00:15:44.498 "num_base_bdevs_discovered": 0, 00:15:44.498 "num_base_bdevs_operational": 3, 00:15:44.498 "base_bdevs_list": [ 00:15:44.498 { 00:15:44.498 "name": "BaseBdev1", 00:15:44.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.498 "is_configured": false, 00:15:44.498 "data_offset": 0, 00:15:44.498 "data_size": 0 00:15:44.498 }, 00:15:44.498 { 00:15:44.498 "name": "BaseBdev2", 00:15:44.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.498 "is_configured": false, 00:15:44.498 "data_offset": 0, 00:15:44.498 "data_size": 0 00:15:44.498 }, 00:15:44.498 { 00:15:44.498 "name": "BaseBdev3", 00:15:44.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.498 "is_configured": false, 00:15:44.498 "data_offset": 0, 00:15:44.498 "data_size": 0 00:15:44.498 } 00:15:44.498 ] 00:15:44.498 }' 00:15:44.498 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.498 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.757 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:44.758 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.758 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.758 [2024-09-30 12:32:56.620554] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:44.758 
[2024-09-30 12:32:56.620589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:44.758 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.758 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:44.758 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.758 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.758 [2024-09-30 12:32:56.632554] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:44.758 [2024-09-30 12:32:56.632595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:44.758 [2024-09-30 12:32:56.632602] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:44.758 [2024-09-30 12:32:56.632611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:44.758 [2024-09-30 12:32:56.632616] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:44.758 [2024-09-30 12:32:56.632624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:44.758 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.758 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:44.758 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.758 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.018 [2024-09-30 12:32:56.707895] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.018 BaseBdev1 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.018 [ 00:15:45.018 { 00:15:45.018 "name": "BaseBdev1", 00:15:45.018 "aliases": [ 00:15:45.018 "ac2a64c9-d133-46cd-8c60-140747300fea" 00:15:45.018 ], 00:15:45.018 "product_name": "Malloc disk", 00:15:45.018 "block_size": 512, 00:15:45.018 
"num_blocks": 65536, 00:15:45.018 "uuid": "ac2a64c9-d133-46cd-8c60-140747300fea", 00:15:45.018 "assigned_rate_limits": { 00:15:45.018 "rw_ios_per_sec": 0, 00:15:45.018 "rw_mbytes_per_sec": 0, 00:15:45.018 "r_mbytes_per_sec": 0, 00:15:45.018 "w_mbytes_per_sec": 0 00:15:45.018 }, 00:15:45.018 "claimed": true, 00:15:45.018 "claim_type": "exclusive_write", 00:15:45.018 "zoned": false, 00:15:45.018 "supported_io_types": { 00:15:45.018 "read": true, 00:15:45.018 "write": true, 00:15:45.018 "unmap": true, 00:15:45.018 "flush": true, 00:15:45.018 "reset": true, 00:15:45.018 "nvme_admin": false, 00:15:45.018 "nvme_io": false, 00:15:45.018 "nvme_io_md": false, 00:15:45.018 "write_zeroes": true, 00:15:45.018 "zcopy": true, 00:15:45.018 "get_zone_info": false, 00:15:45.018 "zone_management": false, 00:15:45.018 "zone_append": false, 00:15:45.018 "compare": false, 00:15:45.018 "compare_and_write": false, 00:15:45.018 "abort": true, 00:15:45.018 "seek_hole": false, 00:15:45.018 "seek_data": false, 00:15:45.018 "copy": true, 00:15:45.018 "nvme_iov_md": false 00:15:45.018 }, 00:15:45.018 "memory_domains": [ 00:15:45.018 { 00:15:45.018 "dma_device_id": "system", 00:15:45.018 "dma_device_type": 1 00:15:45.018 }, 00:15:45.018 { 00:15:45.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.018 "dma_device_type": 2 00:15:45.018 } 00:15:45.018 ], 00:15:45.018 "driver_specific": {} 00:15:45.018 } 00:15:45.018 ] 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.018 "name": "Existed_Raid", 00:15:45.018 "uuid": "213be255-3f6a-42fc-a08e-acd9c38e1339", 00:15:45.018 "strip_size_kb": 64, 00:15:45.018 "state": "configuring", 00:15:45.018 "raid_level": "raid5f", 00:15:45.018 "superblock": true, 00:15:45.018 "num_base_bdevs": 3, 00:15:45.018 "num_base_bdevs_discovered": 1, 00:15:45.018 "num_base_bdevs_operational": 3, 00:15:45.018 "base_bdevs_list": [ 00:15:45.018 { 00:15:45.018 
"name": "BaseBdev1", 00:15:45.018 "uuid": "ac2a64c9-d133-46cd-8c60-140747300fea", 00:15:45.018 "is_configured": true, 00:15:45.018 "data_offset": 2048, 00:15:45.018 "data_size": 63488 00:15:45.018 }, 00:15:45.018 { 00:15:45.018 "name": "BaseBdev2", 00:15:45.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.018 "is_configured": false, 00:15:45.018 "data_offset": 0, 00:15:45.018 "data_size": 0 00:15:45.018 }, 00:15:45.018 { 00:15:45.018 "name": "BaseBdev3", 00:15:45.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.018 "is_configured": false, 00:15:45.018 "data_offset": 0, 00:15:45.018 "data_size": 0 00:15:45.018 } 00:15:45.018 ] 00:15:45.018 }' 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.018 12:32:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.588 [2024-09-30 12:32:57.195339] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:45.588 [2024-09-30 12:32:57.195384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:45.588 [2024-09-30 12:32:57.207363] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.588 [2024-09-30 12:32:57.208970] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.588 [2024-09-30 12:32:57.209011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.588 [2024-09-30 12:32:57.209020] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:45.588 [2024-09-30 12:32:57.209028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.588 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.589 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.589 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.589 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.589 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.589 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.589 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.589 "name": "Existed_Raid", 00:15:45.589 "uuid": "c2d42ef9-53cf-454d-a777-20fa54f7d974", 00:15:45.589 "strip_size_kb": 64, 00:15:45.589 "state": "configuring", 00:15:45.589 "raid_level": "raid5f", 00:15:45.589 "superblock": true, 00:15:45.589 "num_base_bdevs": 3, 00:15:45.589 "num_base_bdevs_discovered": 1, 00:15:45.589 "num_base_bdevs_operational": 3, 00:15:45.589 "base_bdevs_list": [ 00:15:45.589 { 00:15:45.589 "name": "BaseBdev1", 00:15:45.589 "uuid": "ac2a64c9-d133-46cd-8c60-140747300fea", 00:15:45.589 "is_configured": true, 00:15:45.589 "data_offset": 2048, 00:15:45.589 "data_size": 63488 00:15:45.589 }, 00:15:45.589 { 00:15:45.589 "name": "BaseBdev2", 00:15:45.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.589 "is_configured": false, 00:15:45.589 "data_offset": 0, 00:15:45.589 "data_size": 0 00:15:45.589 }, 00:15:45.589 { 00:15:45.589 "name": "BaseBdev3", 00:15:45.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.589 "is_configured": false, 00:15:45.589 "data_offset": 0, 00:15:45.589 "data_size": 
0 00:15:45.589 } 00:15:45.589 ] 00:15:45.589 }' 00:15:45.589 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.589 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.848 [2024-09-30 12:32:57.629385] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.848 BaseBdev2 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.848 [ 00:15:45.848 { 00:15:45.848 "name": "BaseBdev2", 00:15:45.848 "aliases": [ 00:15:45.848 "c59702b8-dc47-45dd-892b-d9d69d2f1eff" 00:15:45.848 ], 00:15:45.848 "product_name": "Malloc disk", 00:15:45.848 "block_size": 512, 00:15:45.848 "num_blocks": 65536, 00:15:45.848 "uuid": "c59702b8-dc47-45dd-892b-d9d69d2f1eff", 00:15:45.848 "assigned_rate_limits": { 00:15:45.848 "rw_ios_per_sec": 0, 00:15:45.848 "rw_mbytes_per_sec": 0, 00:15:45.848 "r_mbytes_per_sec": 0, 00:15:45.848 "w_mbytes_per_sec": 0 00:15:45.848 }, 00:15:45.848 "claimed": true, 00:15:45.848 "claim_type": "exclusive_write", 00:15:45.848 "zoned": false, 00:15:45.848 "supported_io_types": { 00:15:45.848 "read": true, 00:15:45.848 "write": true, 00:15:45.848 "unmap": true, 00:15:45.848 "flush": true, 00:15:45.848 "reset": true, 00:15:45.848 "nvme_admin": false, 00:15:45.848 "nvme_io": false, 00:15:45.848 "nvme_io_md": false, 00:15:45.848 "write_zeroes": true, 00:15:45.848 "zcopy": true, 00:15:45.848 "get_zone_info": false, 00:15:45.848 "zone_management": false, 00:15:45.848 "zone_append": false, 00:15:45.848 "compare": false, 00:15:45.848 "compare_and_write": false, 00:15:45.848 "abort": true, 00:15:45.848 "seek_hole": false, 00:15:45.848 "seek_data": false, 00:15:45.848 "copy": true, 00:15:45.848 "nvme_iov_md": false 00:15:45.848 }, 00:15:45.848 "memory_domains": [ 00:15:45.848 { 00:15:45.848 "dma_device_id": "system", 00:15:45.848 "dma_device_type": 1 00:15:45.848 }, 00:15:45.848 { 00:15:45.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.848 "dma_device_type": 2 00:15:45.848 } 
00:15:45.848 ], 00:15:45.848 "driver_specific": {} 00:15:45.848 } 00:15:45.848 ] 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.848 12:32:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.848 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.848 "name": "Existed_Raid", 00:15:45.848 "uuid": "c2d42ef9-53cf-454d-a777-20fa54f7d974", 00:15:45.848 "strip_size_kb": 64, 00:15:45.848 "state": "configuring", 00:15:45.848 "raid_level": "raid5f", 00:15:45.849 "superblock": true, 00:15:45.849 "num_base_bdevs": 3, 00:15:45.849 "num_base_bdevs_discovered": 2, 00:15:45.849 "num_base_bdevs_operational": 3, 00:15:45.849 "base_bdevs_list": [ 00:15:45.849 { 00:15:45.849 "name": "BaseBdev1", 00:15:45.849 "uuid": "ac2a64c9-d133-46cd-8c60-140747300fea", 00:15:45.849 "is_configured": true, 00:15:45.849 "data_offset": 2048, 00:15:45.849 "data_size": 63488 00:15:45.849 }, 00:15:45.849 { 00:15:45.849 "name": "BaseBdev2", 00:15:45.849 "uuid": "c59702b8-dc47-45dd-892b-d9d69d2f1eff", 00:15:45.849 "is_configured": true, 00:15:45.849 "data_offset": 2048, 00:15:45.849 "data_size": 63488 00:15:45.849 }, 00:15:45.849 { 00:15:45.849 "name": "BaseBdev3", 00:15:45.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.849 "is_configured": false, 00:15:45.849 "data_offset": 0, 00:15:45.849 "data_size": 0 00:15:45.849 } 00:15:45.849 ] 00:15:45.849 }' 00:15:45.849 12:32:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.849 12:32:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.419 [2024-09-30 12:32:58.167173] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:46.419 [2024-09-30 12:32:58.167424] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:46.419 [2024-09-30 12:32:58.167447] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:46.419 [2024-09-30 12:32:58.167680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:46.419 BaseBdev3 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.419 [2024-09-30 12:32:58.173215] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:46.419 [2024-09-30 12:32:58.173238] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:46.419 [2024-09-30 12:32:58.173387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.419 [ 00:15:46.419 { 00:15:46.419 "name": "BaseBdev3", 00:15:46.419 "aliases": [ 00:15:46.419 "55c94199-3e38-4f7e-8d08-0b15e9dba779" 00:15:46.419 ], 00:15:46.419 "product_name": "Malloc disk", 00:15:46.419 "block_size": 512, 00:15:46.419 "num_blocks": 65536, 00:15:46.419 "uuid": "55c94199-3e38-4f7e-8d08-0b15e9dba779", 00:15:46.419 "assigned_rate_limits": { 00:15:46.419 "rw_ios_per_sec": 0, 00:15:46.419 "rw_mbytes_per_sec": 0, 00:15:46.419 "r_mbytes_per_sec": 0, 00:15:46.419 "w_mbytes_per_sec": 0 00:15:46.419 }, 00:15:46.419 "claimed": true, 00:15:46.419 "claim_type": "exclusive_write", 00:15:46.419 "zoned": false, 00:15:46.419 "supported_io_types": { 00:15:46.419 "read": true, 00:15:46.419 "write": true, 00:15:46.419 "unmap": true, 00:15:46.419 "flush": true, 00:15:46.419 "reset": true, 00:15:46.419 "nvme_admin": false, 00:15:46.419 "nvme_io": false, 00:15:46.419 "nvme_io_md": false, 00:15:46.419 "write_zeroes": true, 00:15:46.419 "zcopy": true, 00:15:46.419 "get_zone_info": false, 00:15:46.419 "zone_management": false, 00:15:46.419 "zone_append": false, 00:15:46.419 "compare": false, 00:15:46.419 "compare_and_write": false, 00:15:46.419 "abort": true, 00:15:46.419 "seek_hole": false, 00:15:46.419 "seek_data": false, 00:15:46.419 "copy": true, 00:15:46.419 
"nvme_iov_md": false 00:15:46.419 }, 00:15:46.419 "memory_domains": [ 00:15:46.419 { 00:15:46.419 "dma_device_id": "system", 00:15:46.419 "dma_device_type": 1 00:15:46.419 }, 00:15:46.419 { 00:15:46.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.419 "dma_device_type": 2 00:15:46.419 } 00:15:46.419 ], 00:15:46.419 "driver_specific": {} 00:15:46.419 } 00:15:46.419 ] 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.419 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.419 "name": "Existed_Raid", 00:15:46.419 "uuid": "c2d42ef9-53cf-454d-a777-20fa54f7d974", 00:15:46.419 "strip_size_kb": 64, 00:15:46.419 "state": "online", 00:15:46.419 "raid_level": "raid5f", 00:15:46.419 "superblock": true, 00:15:46.419 "num_base_bdevs": 3, 00:15:46.419 "num_base_bdevs_discovered": 3, 00:15:46.420 "num_base_bdevs_operational": 3, 00:15:46.420 "base_bdevs_list": [ 00:15:46.420 { 00:15:46.420 "name": "BaseBdev1", 00:15:46.420 "uuid": "ac2a64c9-d133-46cd-8c60-140747300fea", 00:15:46.420 "is_configured": true, 00:15:46.420 "data_offset": 2048, 00:15:46.420 "data_size": 63488 00:15:46.420 }, 00:15:46.420 { 00:15:46.420 "name": "BaseBdev2", 00:15:46.420 "uuid": "c59702b8-dc47-45dd-892b-d9d69d2f1eff", 00:15:46.420 "is_configured": true, 00:15:46.420 "data_offset": 2048, 00:15:46.420 "data_size": 63488 00:15:46.420 }, 00:15:46.420 { 00:15:46.420 "name": "BaseBdev3", 00:15:46.420 "uuid": "55c94199-3e38-4f7e-8d08-0b15e9dba779", 00:15:46.420 "is_configured": true, 00:15:46.420 "data_offset": 2048, 00:15:46.420 "data_size": 63488 00:15:46.420 } 00:15:46.420 ] 00:15:46.420 }' 00:15:46.420 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.420 12:32:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.990 [2024-09-30 12:32:58.674520] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:46.990 "name": "Existed_Raid", 00:15:46.990 "aliases": [ 00:15:46.990 "c2d42ef9-53cf-454d-a777-20fa54f7d974" 00:15:46.990 ], 00:15:46.990 "product_name": "Raid Volume", 00:15:46.990 "block_size": 512, 00:15:46.990 "num_blocks": 126976, 00:15:46.990 "uuid": "c2d42ef9-53cf-454d-a777-20fa54f7d974", 00:15:46.990 "assigned_rate_limits": { 00:15:46.990 "rw_ios_per_sec": 0, 00:15:46.990 
"rw_mbytes_per_sec": 0, 00:15:46.990 "r_mbytes_per_sec": 0, 00:15:46.990 "w_mbytes_per_sec": 0 00:15:46.990 }, 00:15:46.990 "claimed": false, 00:15:46.990 "zoned": false, 00:15:46.990 "supported_io_types": { 00:15:46.990 "read": true, 00:15:46.990 "write": true, 00:15:46.990 "unmap": false, 00:15:46.990 "flush": false, 00:15:46.990 "reset": true, 00:15:46.990 "nvme_admin": false, 00:15:46.990 "nvme_io": false, 00:15:46.990 "nvme_io_md": false, 00:15:46.990 "write_zeroes": true, 00:15:46.990 "zcopy": false, 00:15:46.990 "get_zone_info": false, 00:15:46.990 "zone_management": false, 00:15:46.990 "zone_append": false, 00:15:46.990 "compare": false, 00:15:46.990 "compare_and_write": false, 00:15:46.990 "abort": false, 00:15:46.990 "seek_hole": false, 00:15:46.990 "seek_data": false, 00:15:46.990 "copy": false, 00:15:46.990 "nvme_iov_md": false 00:15:46.990 }, 00:15:46.990 "driver_specific": { 00:15:46.990 "raid": { 00:15:46.990 "uuid": "c2d42ef9-53cf-454d-a777-20fa54f7d974", 00:15:46.990 "strip_size_kb": 64, 00:15:46.990 "state": "online", 00:15:46.990 "raid_level": "raid5f", 00:15:46.990 "superblock": true, 00:15:46.990 "num_base_bdevs": 3, 00:15:46.990 "num_base_bdevs_discovered": 3, 00:15:46.990 "num_base_bdevs_operational": 3, 00:15:46.990 "base_bdevs_list": [ 00:15:46.990 { 00:15:46.990 "name": "BaseBdev1", 00:15:46.990 "uuid": "ac2a64c9-d133-46cd-8c60-140747300fea", 00:15:46.990 "is_configured": true, 00:15:46.990 "data_offset": 2048, 00:15:46.990 "data_size": 63488 00:15:46.990 }, 00:15:46.990 { 00:15:46.990 "name": "BaseBdev2", 00:15:46.990 "uuid": "c59702b8-dc47-45dd-892b-d9d69d2f1eff", 00:15:46.990 "is_configured": true, 00:15:46.990 "data_offset": 2048, 00:15:46.990 "data_size": 63488 00:15:46.990 }, 00:15:46.990 { 00:15:46.990 "name": "BaseBdev3", 00:15:46.990 "uuid": "55c94199-3e38-4f7e-8d08-0b15e9dba779", 00:15:46.990 "is_configured": true, 00:15:46.990 "data_offset": 2048, 00:15:46.990 "data_size": 63488 00:15:46.990 } 00:15:46.990 ] 00:15:46.990 } 
00:15:46.990 } 00:15:46.990 }' 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:46.990 BaseBdev2 00:15:46.990 BaseBdev3' 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.990 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.250 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.250 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.250 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.250 12:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:47.250 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.250 12:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.250 [2024-09-30 
12:32:58.917933] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.250 12:32:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.250 "name": "Existed_Raid", 00:15:47.250 "uuid": "c2d42ef9-53cf-454d-a777-20fa54f7d974", 00:15:47.250 "strip_size_kb": 64, 00:15:47.250 "state": "online", 00:15:47.250 "raid_level": "raid5f", 00:15:47.250 "superblock": true, 00:15:47.250 "num_base_bdevs": 3, 00:15:47.250 "num_base_bdevs_discovered": 2, 00:15:47.250 "num_base_bdevs_operational": 2, 00:15:47.250 "base_bdevs_list": [ 00:15:47.250 { 00:15:47.250 "name": null, 00:15:47.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.250 "is_configured": false, 00:15:47.250 "data_offset": 0, 00:15:47.250 "data_size": 63488 00:15:47.250 }, 00:15:47.250 { 00:15:47.250 "name": "BaseBdev2", 00:15:47.250 "uuid": "c59702b8-dc47-45dd-892b-d9d69d2f1eff", 00:15:47.250 "is_configured": true, 00:15:47.250 "data_offset": 2048, 00:15:47.250 "data_size": 63488 00:15:47.250 }, 00:15:47.250 { 00:15:47.250 "name": "BaseBdev3", 00:15:47.250 "uuid": "55c94199-3e38-4f7e-8d08-0b15e9dba779", 00:15:47.250 "is_configured": true, 00:15:47.250 "data_offset": 2048, 00:15:47.250 "data_size": 63488 00:15:47.250 } 00:15:47.250 ] 00:15:47.250 }' 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.250 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
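The `verify_raid_bdev_state` helper above fetches `bdev_raid_get_bdevs all` and narrows it with `jq -r '.[] | select(.name == "Existed_Raid")'` before checking the state and discovery counters. A minimal Python sketch of that same selection (not SPDK code; the sample record is abridged from the dump above, with field names taken verbatim from it):

```python
import json

# Abridged copy of the `bdev_raid_get_bdevs all` output printed in the log,
# after BaseBdev1 was deleted (2 of 3 base bdevs discovered, state "online").
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "state": "online",
    "raid_level": "raid5f",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2,
    "base_bdevs_list": [
      {"name": null,        "is_configured": false},
      {"name": "BaseBdev2", "is_configured": true},
      {"name": "BaseBdev3", "is_configured": true}
    ]
  }
]
""")

# Mirrors: jq -r '.[] | select(.name == "Existed_Raid")'
info = next(b for b in raid_bdevs if b["name"] == "Existed_Raid")

# Mirrors the earlier filter at bdev_raid.sh@188:
# jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
# (here the list sits at top level in the raid-bdev dump).
configured = [b["name"] for b in info["base_bdevs_list"] if b["is_configured"]]

print(info["state"], info["num_base_bdevs_discovered"], configured)
# -> online 2 ['BaseBdev2', 'BaseBdev3']
```

This matches what the test asserts after removing one base bdev: a raid5f array keeps redundancy, so `expected_state=online` with two of three base bdevs still configured.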
00:15:47.509 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:47.509 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:47.509 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.509 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.509 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.509 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:47.769 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.769 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:47.769 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:47.769 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:47.769 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.769 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.769 [2024-09-30 12:32:59.446623] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:47.769 [2024-09-30 12:32:59.446786] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.769 [2024-09-30 12:32:59.535236] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.769 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.769 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:47.769 12:32:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:47.769 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.769 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.769 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:47.769 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.769 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.769 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:47.769 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:47.769 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:47.769 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.769 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.769 [2024-09-30 12:32:59.595133] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:47.769 [2024-09-30 12:32:59.595183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.030 
12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.030 BaseBdev2 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:48.030 12:32:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.030 [ 00:15:48.030 { 00:15:48.030 "name": "BaseBdev2", 00:15:48.030 "aliases": [ 00:15:48.030 "3e0977aa-53cf-4136-ae3f-b4f607b7fd62" 00:15:48.030 ], 00:15:48.030 "product_name": "Malloc disk", 00:15:48.030 "block_size": 512, 00:15:48.030 "num_blocks": 65536, 00:15:48.030 "uuid": "3e0977aa-53cf-4136-ae3f-b4f607b7fd62", 00:15:48.030 "assigned_rate_limits": { 00:15:48.030 "rw_ios_per_sec": 0, 00:15:48.030 "rw_mbytes_per_sec": 0, 00:15:48.030 "r_mbytes_per_sec": 0, 00:15:48.030 "w_mbytes_per_sec": 0 00:15:48.030 }, 00:15:48.030 "claimed": false, 00:15:48.030 "zoned": false, 00:15:48.030 "supported_io_types": { 00:15:48.030 "read": true, 00:15:48.030 "write": true, 00:15:48.030 "unmap": true, 00:15:48.030 "flush": true, 00:15:48.030 "reset": true, 00:15:48.030 "nvme_admin": false, 00:15:48.030 "nvme_io": false, 00:15:48.030 "nvme_io_md": false, 00:15:48.030 "write_zeroes": true, 00:15:48.030 "zcopy": true, 00:15:48.030 "get_zone_info": false, 
00:15:48.030 "zone_management": false, 00:15:48.030 "zone_append": false, 00:15:48.030 "compare": false, 00:15:48.030 "compare_and_write": false, 00:15:48.030 "abort": true, 00:15:48.030 "seek_hole": false, 00:15:48.030 "seek_data": false, 00:15:48.030 "copy": true, 00:15:48.030 "nvme_iov_md": false 00:15:48.030 }, 00:15:48.030 "memory_domains": [ 00:15:48.030 { 00:15:48.030 "dma_device_id": "system", 00:15:48.030 "dma_device_type": 1 00:15:48.030 }, 00:15:48.030 { 00:15:48.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.030 "dma_device_type": 2 00:15:48.030 } 00:15:48.030 ], 00:15:48.030 "driver_specific": {} 00:15:48.030 } 00:15:48.030 ] 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.030 BaseBdev3 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:48.030 12:32:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.030 [ 00:15:48.030 { 00:15:48.030 "name": "BaseBdev3", 00:15:48.030 "aliases": [ 00:15:48.030 "9aa23f57-66d4-4c5d-a143-08e5ab3186d3" 00:15:48.030 ], 00:15:48.030 "product_name": "Malloc disk", 00:15:48.030 "block_size": 512, 00:15:48.030 "num_blocks": 65536, 00:15:48.030 "uuid": "9aa23f57-66d4-4c5d-a143-08e5ab3186d3", 00:15:48.030 "assigned_rate_limits": { 00:15:48.030 "rw_ios_per_sec": 0, 00:15:48.030 "rw_mbytes_per_sec": 0, 00:15:48.030 "r_mbytes_per_sec": 0, 00:15:48.030 "w_mbytes_per_sec": 0 00:15:48.030 }, 00:15:48.030 "claimed": false, 00:15:48.030 "zoned": false, 00:15:48.030 "supported_io_types": { 00:15:48.030 "read": true, 00:15:48.030 "write": true, 00:15:48.030 "unmap": true, 00:15:48.030 "flush": true, 00:15:48.030 "reset": true, 00:15:48.030 "nvme_admin": false, 00:15:48.030 "nvme_io": false, 00:15:48.030 "nvme_io_md": 
false, 00:15:48.030 "write_zeroes": true, 00:15:48.030 "zcopy": true, 00:15:48.030 "get_zone_info": false, 00:15:48.030 "zone_management": false, 00:15:48.030 "zone_append": false, 00:15:48.030 "compare": false, 00:15:48.030 "compare_and_write": false, 00:15:48.030 "abort": true, 00:15:48.030 "seek_hole": false, 00:15:48.030 "seek_data": false, 00:15:48.030 "copy": true, 00:15:48.030 "nvme_iov_md": false 00:15:48.030 }, 00:15:48.030 "memory_domains": [ 00:15:48.030 { 00:15:48.030 "dma_device_id": "system", 00:15:48.030 "dma_device_type": 1 00:15:48.030 }, 00:15:48.030 { 00:15:48.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.030 "dma_device_type": 2 00:15:48.030 } 00:15:48.030 ], 00:15:48.030 "driver_specific": {} 00:15:48.030 } 00:15:48.030 ] 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.030 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.030 [2024-09-30 12:32:59.896815] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:48.031 [2024-09-30 12:32:59.896865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:48.031 [2024-09-30 12:32:59.896884] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:15:48.031 [2024-09-30 12:32:59.898516] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:48.031 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.031 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:48.031 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.031 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.031 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.031 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.031 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.031 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.031 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.031 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.031 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.031 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.031 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.031 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.031 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.291 12:32:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.291 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.291 "name": "Existed_Raid", 00:15:48.291 "uuid": "4a378365-f127-4542-91df-073addc47e2c", 00:15:48.291 "strip_size_kb": 64, 00:15:48.291 "state": "configuring", 00:15:48.291 "raid_level": "raid5f", 00:15:48.291 "superblock": true, 00:15:48.291 "num_base_bdevs": 3, 00:15:48.291 "num_base_bdevs_discovered": 2, 00:15:48.291 "num_base_bdevs_operational": 3, 00:15:48.291 "base_bdevs_list": [ 00:15:48.291 { 00:15:48.291 "name": "BaseBdev1", 00:15:48.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.291 "is_configured": false, 00:15:48.291 "data_offset": 0, 00:15:48.291 "data_size": 0 00:15:48.291 }, 00:15:48.291 { 00:15:48.291 "name": "BaseBdev2", 00:15:48.291 "uuid": "3e0977aa-53cf-4136-ae3f-b4f607b7fd62", 00:15:48.291 "is_configured": true, 00:15:48.291 "data_offset": 2048, 00:15:48.291 "data_size": 63488 00:15:48.291 }, 00:15:48.291 { 00:15:48.291 "name": "BaseBdev3", 00:15:48.291 "uuid": "9aa23f57-66d4-4c5d-a143-08e5ab3186d3", 00:15:48.291 "is_configured": true, 00:15:48.291 "data_offset": 2048, 00:15:48.291 "data_size": 63488 00:15:48.291 } 00:15:48.291 ] 00:15:48.291 }' 00:15:48.291 12:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.291 12:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.555 [2024-09-30 12:33:00.344018] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:48.555 
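The `[[ 512 == \5\1\2\ \ \ ]]` comparisons earlier in the run come from `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` evaluated once for the raid bdev and once per base bdev, then compared as strings; null fields join as empty strings, which is why the value is `512` followed by three spaces. A Python sketch of that string-building (an assumption about jq's null handling, consistent with the value the log prints):

```python
def bdev_cmp_string(bdev: dict) -> str:
    # Mirrors the jq filter at bdev_raid.sh@189/@192:
    #   [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
    # Null/missing fields stringify to "", so a 512-byte-block bdev with no
    # metadata yields "512   " (three trailing spaces) -- the escaped
    # `\5\1\2\ \ \ ` seen in the xtrace comparison.
    fields = [bdev.get("block_size"), bdev.get("md_size"),
              bdev.get("md_interleave"), bdev.get("dif_type")]
    return " ".join("" if f is None else str(f) for f in fields)

raid = {"block_size": 512, "md_size": None, "md_interleave": None, "dif_type": None}
base = {"block_size": 512}  # missing fields behave like null here

assert bdev_cmp_string(raid) == "512   "
assert bdev_cmp_string(base) == bdev_cmp_string(raid)
```

Comparing the joined strings in one shot is a cheap way to assert that every base bdev's block size, metadata size, interleave mode, and DIF type all agree with the raid bdev at once.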
12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:48.555 "name": "Existed_Raid", 00:15:48.555 "uuid": "4a378365-f127-4542-91df-073addc47e2c", 00:15:48.555 "strip_size_kb": 64, 00:15:48.555 "state": "configuring", 00:15:48.555 "raid_level": "raid5f", 00:15:48.555 "superblock": true, 00:15:48.555 "num_base_bdevs": 3, 00:15:48.555 "num_base_bdevs_discovered": 1, 00:15:48.555 "num_base_bdevs_operational": 3, 00:15:48.555 "base_bdevs_list": [ 00:15:48.555 { 00:15:48.555 "name": "BaseBdev1", 00:15:48.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.555 "is_configured": false, 00:15:48.555 "data_offset": 0, 00:15:48.555 "data_size": 0 00:15:48.555 }, 00:15:48.555 { 00:15:48.555 "name": null, 00:15:48.555 "uuid": "3e0977aa-53cf-4136-ae3f-b4f607b7fd62", 00:15:48.555 "is_configured": false, 00:15:48.555 "data_offset": 0, 00:15:48.555 "data_size": 63488 00:15:48.555 }, 00:15:48.555 { 00:15:48.555 "name": "BaseBdev3", 00:15:48.555 "uuid": "9aa23f57-66d4-4c5d-a143-08e5ab3186d3", 00:15:48.555 "is_configured": true, 00:15:48.555 "data_offset": 2048, 00:15:48.555 "data_size": 63488 00:15:48.555 } 00:15:48.555 ] 00:15:48.555 }' 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.555 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.124 [2024-09-30 12:33:00.848145] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.124 BaseBdev1 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:49.124 
12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.124 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.124 [ 00:15:49.124 { 00:15:49.124 "name": "BaseBdev1", 00:15:49.124 "aliases": [ 00:15:49.124 "6d59c85d-4361-4495-94db-330615b9d4bb" 00:15:49.124 ], 00:15:49.124 "product_name": "Malloc disk", 00:15:49.124 "block_size": 512, 00:15:49.124 "num_blocks": 65536, 00:15:49.124 "uuid": "6d59c85d-4361-4495-94db-330615b9d4bb", 00:15:49.124 "assigned_rate_limits": { 00:15:49.124 "rw_ios_per_sec": 0, 00:15:49.124 "rw_mbytes_per_sec": 0, 00:15:49.124 "r_mbytes_per_sec": 0, 00:15:49.124 "w_mbytes_per_sec": 0 00:15:49.124 }, 00:15:49.124 "claimed": true, 00:15:49.124 "claim_type": "exclusive_write", 00:15:49.124 "zoned": false, 00:15:49.124 "supported_io_types": { 00:15:49.124 "read": true, 00:15:49.124 "write": true, 00:15:49.125 "unmap": true, 00:15:49.125 "flush": true, 00:15:49.125 "reset": true, 00:15:49.125 "nvme_admin": false, 00:15:49.125 "nvme_io": false, 00:15:49.125 "nvme_io_md": false, 00:15:49.125 "write_zeroes": true, 00:15:49.125 "zcopy": true, 00:15:49.125 "get_zone_info": false, 00:15:49.125 "zone_management": false, 00:15:49.125 "zone_append": false, 00:15:49.125 "compare": false, 00:15:49.125 "compare_and_write": false, 00:15:49.125 "abort": true, 00:15:49.125 "seek_hole": false, 00:15:49.125 "seek_data": false, 00:15:49.125 "copy": true, 00:15:49.125 "nvme_iov_md": false 00:15:49.125 }, 00:15:49.125 "memory_domains": [ 00:15:49.125 { 00:15:49.125 "dma_device_id": "system", 00:15:49.125 "dma_device_type": 1 00:15:49.125 }, 00:15:49.125 { 00:15:49.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.125 "dma_device_type": 2 00:15:49.125 } 00:15:49.125 ], 00:15:49.125 "driver_specific": {} 00:15:49.125 } 00:15:49.125 ] 00:15:49.125 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.125 
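The `waitforbdev` helper seen above (`common/autotest_common.sh@899`-`@907`) defaults `bdev_timeout` to 2000 ms, waits for examine to finish, then queries `bdev_get_bdevs -b <name> -t 2000` and returns 0 once the bdev exists. A rough Python sketch of the same wait-until-present pattern; `get_bdevs` here is a hypothetical stand-in for the RPC call, not an SPDK API:

```python
import time

def wait_for_bdev(get_bdevs, name: str, timeout_ms: int = 2000,
                  poll_ms: int = 100) -> bool:
    # Poll the (hypothetical) RPC until the named bdev appears in the
    # returned list of bdev dicts, or give up once the timeout elapses.
    deadline = time.monotonic() + timeout_ms / 1000.0
    while True:
        if any(b.get("name") == name for b in get_bdevs()):
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll_ms / 1000.0)

# Usage with fake RPCs: one where the bdev already exists, one where it never does.
assert wait_for_bdev(lambda: [{"name": "BaseBdev2"}], "BaseBdev2")
assert not wait_for_bdev(lambda: [], "NoSuchBdev", timeout_ms=200)
```

Checking immediately before testing the deadline means an already-present bdev returns without sleeping, which keeps the common case fast in a test loop that creates many malloc bdevs back to back.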
12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:49.125 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:49.125 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.125 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.125 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.125 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.125 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.125 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.125 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.125 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.125 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.125 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.125 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.125 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.125 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.125 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.125 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:49.125 "name": "Existed_Raid", 00:15:49.125 "uuid": "4a378365-f127-4542-91df-073addc47e2c", 00:15:49.125 "strip_size_kb": 64, 00:15:49.125 "state": "configuring", 00:15:49.125 "raid_level": "raid5f", 00:15:49.125 "superblock": true, 00:15:49.125 "num_base_bdevs": 3, 00:15:49.125 "num_base_bdevs_discovered": 2, 00:15:49.125 "num_base_bdevs_operational": 3, 00:15:49.125 "base_bdevs_list": [ 00:15:49.125 { 00:15:49.125 "name": "BaseBdev1", 00:15:49.125 "uuid": "6d59c85d-4361-4495-94db-330615b9d4bb", 00:15:49.125 "is_configured": true, 00:15:49.125 "data_offset": 2048, 00:15:49.125 "data_size": 63488 00:15:49.125 }, 00:15:49.125 { 00:15:49.125 "name": null, 00:15:49.125 "uuid": "3e0977aa-53cf-4136-ae3f-b4f607b7fd62", 00:15:49.125 "is_configured": false, 00:15:49.125 "data_offset": 0, 00:15:49.125 "data_size": 63488 00:15:49.125 }, 00:15:49.125 { 00:15:49.125 "name": "BaseBdev3", 00:15:49.125 "uuid": "9aa23f57-66d4-4c5d-a143-08e5ab3186d3", 00:15:49.125 "is_configured": true, 00:15:49.125 "data_offset": 2048, 00:15:49.125 "data_size": 63488 00:15:49.125 } 00:15:49.125 ] 00:15:49.125 }' 00:15:49.125 12:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.125 12:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.694 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.695 [2024-09-30 12:33:01.391484] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.695 12:33:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.695 "name": "Existed_Raid", 00:15:49.695 "uuid": "4a378365-f127-4542-91df-073addc47e2c", 00:15:49.695 "strip_size_kb": 64, 00:15:49.695 "state": "configuring", 00:15:49.695 "raid_level": "raid5f", 00:15:49.695 "superblock": true, 00:15:49.695 "num_base_bdevs": 3, 00:15:49.695 "num_base_bdevs_discovered": 1, 00:15:49.695 "num_base_bdevs_operational": 3, 00:15:49.695 "base_bdevs_list": [ 00:15:49.695 { 00:15:49.695 "name": "BaseBdev1", 00:15:49.695 "uuid": "6d59c85d-4361-4495-94db-330615b9d4bb", 00:15:49.695 "is_configured": true, 00:15:49.695 "data_offset": 2048, 00:15:49.695 "data_size": 63488 00:15:49.695 }, 00:15:49.695 { 00:15:49.695 "name": null, 00:15:49.695 "uuid": "3e0977aa-53cf-4136-ae3f-b4f607b7fd62", 00:15:49.695 "is_configured": false, 00:15:49.695 "data_offset": 0, 00:15:49.695 "data_size": 63488 00:15:49.695 }, 00:15:49.695 { 00:15:49.695 "name": null, 00:15:49.695 "uuid": "9aa23f57-66d4-4c5d-a143-08e5ab3186d3", 00:15:49.695 "is_configured": false, 00:15:49.695 "data_offset": 0, 00:15:49.695 "data_size": 63488 00:15:49.695 } 00:15:49.695 ] 00:15:49.695 }' 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.695 12:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.264 [2024-09-30 12:33:01.914533] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.264 12:33:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.264 "name": "Existed_Raid", 00:15:50.264 "uuid": "4a378365-f127-4542-91df-073addc47e2c", 00:15:50.264 "strip_size_kb": 64, 00:15:50.264 "state": "configuring", 00:15:50.264 "raid_level": "raid5f", 00:15:50.264 "superblock": true, 00:15:50.264 "num_base_bdevs": 3, 00:15:50.264 "num_base_bdevs_discovered": 2, 00:15:50.264 "num_base_bdevs_operational": 3, 00:15:50.264 "base_bdevs_list": [ 00:15:50.264 { 00:15:50.264 "name": "BaseBdev1", 00:15:50.264 "uuid": "6d59c85d-4361-4495-94db-330615b9d4bb", 00:15:50.264 "is_configured": true, 00:15:50.264 "data_offset": 2048, 00:15:50.264 "data_size": 63488 00:15:50.264 }, 00:15:50.264 { 00:15:50.264 "name": null, 00:15:50.264 "uuid": "3e0977aa-53cf-4136-ae3f-b4f607b7fd62", 00:15:50.264 "is_configured": false, 00:15:50.264 "data_offset": 0, 00:15:50.264 "data_size": 63488 00:15:50.264 }, 00:15:50.264 { 
00:15:50.264 "name": "BaseBdev3", 00:15:50.264 "uuid": "9aa23f57-66d4-4c5d-a143-08e5ab3186d3", 00:15:50.264 "is_configured": true, 00:15:50.264 "data_offset": 2048, 00:15:50.264 "data_size": 63488 00:15:50.264 } 00:15:50.264 ] 00:15:50.264 }' 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.264 12:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.523 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.523 12:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.523 12:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.523 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:50.523 12:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.523 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:50.523 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:50.523 12:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.523 12:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.784 [2024-09-30 12:33:02.421683] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:50.784 12:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.784 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:50.784 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:50.784 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.784 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.784 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.784 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.784 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.784 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.784 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.784 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.784 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.784 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.784 12:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.784 12:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.784 12:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.784 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.784 "name": "Existed_Raid", 00:15:50.784 "uuid": "4a378365-f127-4542-91df-073addc47e2c", 00:15:50.784 "strip_size_kb": 64, 00:15:50.784 "state": "configuring", 00:15:50.784 "raid_level": "raid5f", 00:15:50.784 "superblock": true, 00:15:50.784 "num_base_bdevs": 3, 00:15:50.784 "num_base_bdevs_discovered": 1, 00:15:50.784 
"num_base_bdevs_operational": 3, 00:15:50.784 "base_bdevs_list": [ 00:15:50.784 { 00:15:50.784 "name": null, 00:15:50.784 "uuid": "6d59c85d-4361-4495-94db-330615b9d4bb", 00:15:50.784 "is_configured": false, 00:15:50.784 "data_offset": 0, 00:15:50.784 "data_size": 63488 00:15:50.784 }, 00:15:50.784 { 00:15:50.784 "name": null, 00:15:50.784 "uuid": "3e0977aa-53cf-4136-ae3f-b4f607b7fd62", 00:15:50.784 "is_configured": false, 00:15:50.784 "data_offset": 0, 00:15:50.784 "data_size": 63488 00:15:50.784 }, 00:15:50.784 { 00:15:50.784 "name": "BaseBdev3", 00:15:50.784 "uuid": "9aa23f57-66d4-4c5d-a143-08e5ab3186d3", 00:15:50.784 "is_configured": true, 00:15:50.784 "data_offset": 2048, 00:15:50.784 "data_size": 63488 00:15:50.784 } 00:15:50.784 ] 00:15:50.784 }' 00:15:50.784 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.784 12:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.353 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.353 12:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:51.353 12:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.353 12:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.353 12:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.353 12:33:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.353 [2024-09-30 12:33:03.012772] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.353 "name": "Existed_Raid", 00:15:51.353 "uuid": "4a378365-f127-4542-91df-073addc47e2c", 00:15:51.353 "strip_size_kb": 64, 00:15:51.353 "state": "configuring", 00:15:51.353 "raid_level": "raid5f", 00:15:51.353 "superblock": true, 00:15:51.353 "num_base_bdevs": 3, 00:15:51.353 "num_base_bdevs_discovered": 2, 00:15:51.353 "num_base_bdevs_operational": 3, 00:15:51.353 "base_bdevs_list": [ 00:15:51.353 { 00:15:51.353 "name": null, 00:15:51.353 "uuid": "6d59c85d-4361-4495-94db-330615b9d4bb", 00:15:51.353 "is_configured": false, 00:15:51.353 "data_offset": 0, 00:15:51.353 "data_size": 63488 00:15:51.353 }, 00:15:51.353 { 00:15:51.353 "name": "BaseBdev2", 00:15:51.353 "uuid": "3e0977aa-53cf-4136-ae3f-b4f607b7fd62", 00:15:51.353 "is_configured": true, 00:15:51.353 "data_offset": 2048, 00:15:51.353 "data_size": 63488 00:15:51.353 }, 00:15:51.353 { 00:15:51.353 "name": "BaseBdev3", 00:15:51.353 "uuid": "9aa23f57-66d4-4c5d-a143-08e5ab3186d3", 00:15:51.353 "is_configured": true, 00:15:51.353 "data_offset": 2048, 00:15:51.353 "data_size": 63488 00:15:51.353 } 00:15:51.353 ] 00:15:51.353 }' 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.353 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.613 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.613 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.613 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.613 12:33:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:51.613 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.613 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:51.613 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.613 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:51.613 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.613 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.613 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.872 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6d59c85d-4361-4495-94db-330615b9d4bb 00:15:51.872 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.872 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.872 [2024-09-30 12:33:03.565927] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:51.872 [2024-09-30 12:33:03.566198] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:51.873 [2024-09-30 12:33:03.566237] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:51.873 [2024-09-30 12:33:03.566503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:51.873 NewBaseBdev 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.873 [2024-09-30 12:33:03.571794] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:51.873 [2024-09-30 12:33:03.571817] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:51.873 [2024-09-30 12:33:03.571964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.873 [ 00:15:51.873 { 00:15:51.873 "name": "NewBaseBdev", 00:15:51.873 "aliases": [ 00:15:51.873 
"6d59c85d-4361-4495-94db-330615b9d4bb" 00:15:51.873 ], 00:15:51.873 "product_name": "Malloc disk", 00:15:51.873 "block_size": 512, 00:15:51.873 "num_blocks": 65536, 00:15:51.873 "uuid": "6d59c85d-4361-4495-94db-330615b9d4bb", 00:15:51.873 "assigned_rate_limits": { 00:15:51.873 "rw_ios_per_sec": 0, 00:15:51.873 "rw_mbytes_per_sec": 0, 00:15:51.873 "r_mbytes_per_sec": 0, 00:15:51.873 "w_mbytes_per_sec": 0 00:15:51.873 }, 00:15:51.873 "claimed": true, 00:15:51.873 "claim_type": "exclusive_write", 00:15:51.873 "zoned": false, 00:15:51.873 "supported_io_types": { 00:15:51.873 "read": true, 00:15:51.873 "write": true, 00:15:51.873 "unmap": true, 00:15:51.873 "flush": true, 00:15:51.873 "reset": true, 00:15:51.873 "nvme_admin": false, 00:15:51.873 "nvme_io": false, 00:15:51.873 "nvme_io_md": false, 00:15:51.873 "write_zeroes": true, 00:15:51.873 "zcopy": true, 00:15:51.873 "get_zone_info": false, 00:15:51.873 "zone_management": false, 00:15:51.873 "zone_append": false, 00:15:51.873 "compare": false, 00:15:51.873 "compare_and_write": false, 00:15:51.873 "abort": true, 00:15:51.873 "seek_hole": false, 00:15:51.873 "seek_data": false, 00:15:51.873 "copy": true, 00:15:51.873 "nvme_iov_md": false 00:15:51.873 }, 00:15:51.873 "memory_domains": [ 00:15:51.873 { 00:15:51.873 "dma_device_id": "system", 00:15:51.873 "dma_device_type": 1 00:15:51.873 }, 00:15:51.873 { 00:15:51.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.873 "dma_device_type": 2 00:15:51.873 } 00:15:51.873 ], 00:15:51.873 "driver_specific": {} 00:15:51.873 } 00:15:51.873 ] 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.873 "name": "Existed_Raid", 00:15:51.873 "uuid": "4a378365-f127-4542-91df-073addc47e2c", 00:15:51.873 "strip_size_kb": 64, 00:15:51.873 "state": "online", 00:15:51.873 "raid_level": "raid5f", 00:15:51.873 "superblock": true, 00:15:51.873 "num_base_bdevs": 3, 00:15:51.873 
"num_base_bdevs_discovered": 3, 00:15:51.873 "num_base_bdevs_operational": 3, 00:15:51.873 "base_bdevs_list": [ 00:15:51.873 { 00:15:51.873 "name": "NewBaseBdev", 00:15:51.873 "uuid": "6d59c85d-4361-4495-94db-330615b9d4bb", 00:15:51.873 "is_configured": true, 00:15:51.873 "data_offset": 2048, 00:15:51.873 "data_size": 63488 00:15:51.873 }, 00:15:51.873 { 00:15:51.873 "name": "BaseBdev2", 00:15:51.873 "uuid": "3e0977aa-53cf-4136-ae3f-b4f607b7fd62", 00:15:51.873 "is_configured": true, 00:15:51.873 "data_offset": 2048, 00:15:51.873 "data_size": 63488 00:15:51.873 }, 00:15:51.873 { 00:15:51.873 "name": "BaseBdev3", 00:15:51.873 "uuid": "9aa23f57-66d4-4c5d-a143-08e5ab3186d3", 00:15:51.873 "is_configured": true, 00:15:51.873 "data_offset": 2048, 00:15:51.873 "data_size": 63488 00:15:51.873 } 00:15:51.873 ] 00:15:51.873 }' 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.873 12:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.443 [2024-09-30 12:33:04.045083] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:52.443 "name": "Existed_Raid", 00:15:52.443 "aliases": [ 00:15:52.443 "4a378365-f127-4542-91df-073addc47e2c" 00:15:52.443 ], 00:15:52.443 "product_name": "Raid Volume", 00:15:52.443 "block_size": 512, 00:15:52.443 "num_blocks": 126976, 00:15:52.443 "uuid": "4a378365-f127-4542-91df-073addc47e2c", 00:15:52.443 "assigned_rate_limits": { 00:15:52.443 "rw_ios_per_sec": 0, 00:15:52.443 "rw_mbytes_per_sec": 0, 00:15:52.443 "r_mbytes_per_sec": 0, 00:15:52.443 "w_mbytes_per_sec": 0 00:15:52.443 }, 00:15:52.443 "claimed": false, 00:15:52.443 "zoned": false, 00:15:52.443 "supported_io_types": { 00:15:52.443 "read": true, 00:15:52.443 "write": true, 00:15:52.443 "unmap": false, 00:15:52.443 "flush": false, 00:15:52.443 "reset": true, 00:15:52.443 "nvme_admin": false, 00:15:52.443 "nvme_io": false, 00:15:52.443 "nvme_io_md": false, 00:15:52.443 "write_zeroes": true, 00:15:52.443 "zcopy": false, 00:15:52.443 "get_zone_info": false, 00:15:52.443 "zone_management": false, 00:15:52.443 "zone_append": false, 00:15:52.443 "compare": false, 00:15:52.443 "compare_and_write": false, 00:15:52.443 "abort": false, 00:15:52.443 "seek_hole": false, 00:15:52.443 "seek_data": false, 00:15:52.443 "copy": false, 00:15:52.443 "nvme_iov_md": false 00:15:52.443 }, 00:15:52.443 "driver_specific": { 00:15:52.443 "raid": { 00:15:52.443 "uuid": "4a378365-f127-4542-91df-073addc47e2c", 00:15:52.443 "strip_size_kb": 64, 00:15:52.443 "state": 
"online", 00:15:52.443 "raid_level": "raid5f", 00:15:52.443 "superblock": true, 00:15:52.443 "num_base_bdevs": 3, 00:15:52.443 "num_base_bdevs_discovered": 3, 00:15:52.443 "num_base_bdevs_operational": 3, 00:15:52.443 "base_bdevs_list": [ 00:15:52.443 { 00:15:52.443 "name": "NewBaseBdev", 00:15:52.443 "uuid": "6d59c85d-4361-4495-94db-330615b9d4bb", 00:15:52.443 "is_configured": true, 00:15:52.443 "data_offset": 2048, 00:15:52.443 "data_size": 63488 00:15:52.443 }, 00:15:52.443 { 00:15:52.443 "name": "BaseBdev2", 00:15:52.443 "uuid": "3e0977aa-53cf-4136-ae3f-b4f607b7fd62", 00:15:52.443 "is_configured": true, 00:15:52.443 "data_offset": 2048, 00:15:52.443 "data_size": 63488 00:15:52.443 }, 00:15:52.443 { 00:15:52.443 "name": "BaseBdev3", 00:15:52.443 "uuid": "9aa23f57-66d4-4c5d-a143-08e5ab3186d3", 00:15:52.443 "is_configured": true, 00:15:52.443 "data_offset": 2048, 00:15:52.443 "data_size": 63488 00:15:52.443 } 00:15:52.443 ] 00:15:52.443 } 00:15:52.443 } 00:15:52.443 }' 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:52.443 BaseBdev2 00:15:52.443 BaseBdev3' 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.443 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.443 [2024-09-30 12:33:04.324412] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:52.443 [2024-09-30 12:33:04.324437] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.443 [2024-09-30 12:33:04.324508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.444 [2024-09-30 12:33:04.324769] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.444 [2024-09-30 12:33:04.324782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:52.444 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.444 12:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80358 00:15:52.444 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80358 ']' 00:15:52.444 12:33:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 80358 00:15:52.444 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:52.703 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.703 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80358 00:15:52.703 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:52.703 killing process with pid 80358 00:15:52.703 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:52.703 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80358' 00:15:52.703 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80358 00:15:52.703 [2024-09-30 12:33:04.374784] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.703 12:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 80358 00:15:52.963 [2024-09-30 12:33:04.655796] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:54.345 12:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:54.345 00:15:54.345 real 0m10.572s 00:15:54.345 user 0m16.713s 00:15:54.345 sys 0m1.984s 00:15:54.345 12:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:54.345 12:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.345 ************************************ 00:15:54.345 END TEST raid5f_state_function_test_sb 00:15:54.346 ************************************ 00:15:54.346 12:33:05 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:15:54.346 12:33:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:54.346 12:33:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:54.346 12:33:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:54.346 ************************************ 00:15:54.346 START TEST raid5f_superblock_test 00:15:54.346 ************************************ 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80984 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80984 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 80984 ']' 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:54.346 12:33:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.346 [2024-09-30 12:33:06.011245] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:15:54.346 [2024-09-30 12:33:06.011428] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80984 ] 00:15:54.346 [2024-09-30 12:33:06.173017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.606 [2024-09-30 12:33:06.356313] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.865 [2024-09-30 12:33:06.524098] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.865 [2024-09-30 12:33:06.524229] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.125 malloc1 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.125 [2024-09-30 12:33:06.873522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:55.125 [2024-09-30 12:33:06.873656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.125 [2024-09-30 12:33:06.873682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:55.125 [2024-09-30 12:33:06.873693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.125 [2024-09-30 12:33:06.875586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.125 [2024-09-30 12:33:06.875627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:55.125 pt1 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.125 malloc2 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.125 [2024-09-30 12:33:06.939351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:55.125 [2024-09-30 12:33:06.939479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.125 [2024-09-30 12:33:06.939515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:55.125 [2024-09-30 12:33:06.939541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.125 [2024-09-30 12:33:06.941432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.125 [2024-09-30 12:33:06.941499] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:55.125 pt2 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.125 malloc3 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.125 12:33:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.125 [2024-09-30 12:33:06.998412] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:55.125 [2024-09-30 12:33:06.998510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.125 [2024-09-30 12:33:06.998545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:55.125 [2024-09-30 12:33:06.998571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.125 [2024-09-30 12:33:07.000440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.125 [2024-09-30 12:33:07.000509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:55.125 pt3 00:15:55.125 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.126 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:55.126 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:55.126 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:55.126 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.126 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.126 [2024-09-30 12:33:07.010463] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:55.126 [2024-09-30 12:33:07.012141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.126 [2024-09-30 12:33:07.012238] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:55.126 [2024-09-30 12:33:07.012408] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:55.126 [2024-09-30 12:33:07.012467] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:55.126 [2024-09-30 12:33:07.012698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:55.126 [2024-09-30 12:33:07.018297] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:55.126 [2024-09-30 12:33:07.018348] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:55.126 [2024-09-30 12:33:07.018543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.126 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.384 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:55.384 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.384 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.384 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.384 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.384 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.384 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.384 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.384 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.384 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.384 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.384 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:55.384 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.384 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.384 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.384 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.384 "name": "raid_bdev1", 00:15:55.384 "uuid": "c928cd88-b956-42aa-baa9-068077ceaf5d", 00:15:55.384 "strip_size_kb": 64, 00:15:55.384 "state": "online", 00:15:55.384 "raid_level": "raid5f", 00:15:55.384 "superblock": true, 00:15:55.384 "num_base_bdevs": 3, 00:15:55.384 "num_base_bdevs_discovered": 3, 00:15:55.384 "num_base_bdevs_operational": 3, 00:15:55.384 "base_bdevs_list": [ 00:15:55.384 { 00:15:55.384 "name": "pt1", 00:15:55.384 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:55.384 "is_configured": true, 00:15:55.384 "data_offset": 2048, 00:15:55.384 "data_size": 63488 00:15:55.384 }, 00:15:55.384 { 00:15:55.384 "name": "pt2", 00:15:55.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.384 "is_configured": true, 00:15:55.384 "data_offset": 2048, 00:15:55.384 "data_size": 63488 00:15:55.384 }, 00:15:55.384 { 00:15:55.384 "name": "pt3", 00:15:55.384 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.384 "is_configured": true, 00:15:55.384 "data_offset": 2048, 00:15:55.384 "data_size": 63488 00:15:55.384 } 00:15:55.384 ] 00:15:55.384 }' 00:15:55.384 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.384 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.643 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:55.643 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:55.643 12:33:07 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:55.643 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:55.643 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:55.643 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:55.643 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:55.643 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.643 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:55.643 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.643 [2024-09-30 12:33:07.475902] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.643 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.643 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:55.643 "name": "raid_bdev1", 00:15:55.643 "aliases": [ 00:15:55.643 "c928cd88-b956-42aa-baa9-068077ceaf5d" 00:15:55.643 ], 00:15:55.643 "product_name": "Raid Volume", 00:15:55.643 "block_size": 512, 00:15:55.643 "num_blocks": 126976, 00:15:55.643 "uuid": "c928cd88-b956-42aa-baa9-068077ceaf5d", 00:15:55.643 "assigned_rate_limits": { 00:15:55.643 "rw_ios_per_sec": 0, 00:15:55.643 "rw_mbytes_per_sec": 0, 00:15:55.643 "r_mbytes_per_sec": 0, 00:15:55.643 "w_mbytes_per_sec": 0 00:15:55.643 }, 00:15:55.643 "claimed": false, 00:15:55.643 "zoned": false, 00:15:55.643 "supported_io_types": { 00:15:55.643 "read": true, 00:15:55.643 "write": true, 00:15:55.643 "unmap": false, 00:15:55.643 "flush": false, 00:15:55.643 "reset": true, 00:15:55.643 "nvme_admin": false, 00:15:55.643 "nvme_io": false, 00:15:55.643 "nvme_io_md": false, 
00:15:55.643 "write_zeroes": true, 00:15:55.643 "zcopy": false, 00:15:55.643 "get_zone_info": false, 00:15:55.643 "zone_management": false, 00:15:55.643 "zone_append": false, 00:15:55.643 "compare": false, 00:15:55.643 "compare_and_write": false, 00:15:55.643 "abort": false, 00:15:55.643 "seek_hole": false, 00:15:55.643 "seek_data": false, 00:15:55.643 "copy": false, 00:15:55.643 "nvme_iov_md": false 00:15:55.643 }, 00:15:55.643 "driver_specific": { 00:15:55.643 "raid": { 00:15:55.643 "uuid": "c928cd88-b956-42aa-baa9-068077ceaf5d", 00:15:55.643 "strip_size_kb": 64, 00:15:55.643 "state": "online", 00:15:55.643 "raid_level": "raid5f", 00:15:55.643 "superblock": true, 00:15:55.643 "num_base_bdevs": 3, 00:15:55.643 "num_base_bdevs_discovered": 3, 00:15:55.643 "num_base_bdevs_operational": 3, 00:15:55.643 "base_bdevs_list": [ 00:15:55.643 { 00:15:55.643 "name": "pt1", 00:15:55.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:55.643 "is_configured": true, 00:15:55.643 "data_offset": 2048, 00:15:55.643 "data_size": 63488 00:15:55.643 }, 00:15:55.643 { 00:15:55.643 "name": "pt2", 00:15:55.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.643 "is_configured": true, 00:15:55.643 "data_offset": 2048, 00:15:55.643 "data_size": 63488 00:15:55.643 }, 00:15:55.643 { 00:15:55.643 "name": "pt3", 00:15:55.643 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.643 "is_configured": true, 00:15:55.643 "data_offset": 2048, 00:15:55.643 "data_size": 63488 00:15:55.643 } 00:15:55.643 ] 00:15:55.643 } 00:15:55.643 } 00:15:55.643 }' 00:15:55.643 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:55.903 pt2 00:15:55.903 pt3' 00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.903 
12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.903 [2024-09-30 12:33:07.739415] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c928cd88-b956-42aa-baa9-068077ceaf5d
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c928cd88-b956-42aa-baa9-068077ceaf5d ']'
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.903 [2024-09-30 12:33:07.783180] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:55.903 [2024-09-30 12:33:07.783246] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:55.903 [2024-09-30 12:33:07.783324] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:55.903 [2024-09-30 12:33:07.783415] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:55.903 [2024-09-30 12:33:07.783464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.903 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.164 [2024-09-30 12:33:07.938935] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:15:56.164 [2024-09-30 12:33:07.940648] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:15:56.164 [2024-09-30 12:33:07.940697] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:15:56.164 [2024-09-30 12:33:07.940736] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:15:56.164 [2024-09-30 12:33:07.940786] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:15:56.164 [2024-09-30 12:33:07.940819] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:15:56.164 [2024-09-30 12:33:07.940835] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:56.164 [2024-09-30 12:33:07.940845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:15:56.164 request:
00:15:56.164 {
00:15:56.164 "name": "raid_bdev1",
00:15:56.164 "raid_level": "raid5f",
00:15:56.164 "base_bdevs": [
00:15:56.164 "malloc1",
00:15:56.164 "malloc2",
00:15:56.164 "malloc3"
00:15:56.164 ],
00:15:56.164 "strip_size_kb": 64,
00:15:56.164 "superblock": false,
00:15:56.164 "method": "bdev_raid_create",
00:15:56.164 "req_id": 1
00:15:56.164 }
00:15:56.164 Got JSON-RPC error response
00:15:56.164 response:
00:15:56.164 {
00:15:56.164 "code": -17,
00:15:56.164 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:15:56.164 }
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.164 12:33:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.164 [2024-09-30 12:33:08.006810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:56.164 [2024-09-30 12:33:08.006896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:56.164 [2024-09-30 12:33:08.006929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:15:56.164 [2024-09-30 12:33:08.006952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:56.164 [2024-09-30 12:33:08.008908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:56.164 [2024-09-30 12:33:08.008976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:56.164 [2024-09-30 12:33:08.009054] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:15:56.164 [2024-09-30 12:33:08.009121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:56.164 pt1
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.164 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.424 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:56.424 "name": "raid_bdev1",
00:15:56.424 "uuid": "c928cd88-b956-42aa-baa9-068077ceaf5d",
00:15:56.424 "strip_size_kb": 64,
00:15:56.424 "state": "configuring",
00:15:56.424 "raid_level": "raid5f",
00:15:56.424 "superblock": true,
00:15:56.424 "num_base_bdevs": 3,
00:15:56.424 "num_base_bdevs_discovered": 1,
00:15:56.424 "num_base_bdevs_operational": 3,
00:15:56.424 "base_bdevs_list": [
00:15:56.424 {
00:15:56.424 "name": "pt1",
00:15:56.424 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:56.424 "is_configured": true,
00:15:56.424 "data_offset": 2048,
00:15:56.424 "data_size": 63488
00:15:56.424 },
00:15:56.424 {
00:15:56.424 "name": null,
00:15:56.424 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:56.424 "is_configured": false,
00:15:56.424 "data_offset": 2048,
00:15:56.424 "data_size": 63488
00:15:56.424 },
00:15:56.424 {
00:15:56.424 "name": null,
00:15:56.424 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:56.424 "is_configured": false,
00:15:56.424 "data_offset": 2048,
00:15:56.424 "data_size": 63488
00:15:56.424 }
00:15:56.424 ]
00:15:56.424 }'
00:15:56.424 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:56.424 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.684 [2024-09-30 12:33:08.477972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:56.684 [2024-09-30 12:33:08.478020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:56.684 [2024-09-30 12:33:08.478037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:15:56.684 [2024-09-30 12:33:08.478045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:56.684 [2024-09-30 12:33:08.478352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:56.684 [2024-09-30 12:33:08.478367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:56.684 [2024-09-30 12:33:08.478421] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:56.684 [2024-09-30 12:33:08.478436] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:56.684 pt2
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.684 [2024-09-30 12:33:08.489978] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.684 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:56.684 "name": "raid_bdev1",
00:15:56.684 "uuid": "c928cd88-b956-42aa-baa9-068077ceaf5d",
00:15:56.684 "strip_size_kb": 64,
00:15:56.684 "state": "configuring",
00:15:56.684 "raid_level": "raid5f",
00:15:56.684 "superblock": true,
00:15:56.684 "num_base_bdevs": 3,
00:15:56.684 "num_base_bdevs_discovered": 1,
00:15:56.685 "num_base_bdevs_operational": 3,
00:15:56.685 "base_bdevs_list": [
00:15:56.685 {
00:15:56.685 "name": "pt1",
00:15:56.685 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:56.685 "is_configured": true,
00:15:56.685 "data_offset": 2048,
00:15:56.685 "data_size": 63488
00:15:56.685 },
00:15:56.685 {
00:15:56.685 "name": null,
00:15:56.685 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:56.685 "is_configured": false,
00:15:56.685 "data_offset": 0,
00:15:56.685 "data_size": 63488
00:15:56.685 },
00:15:56.685 {
00:15:56.685 "name": null,
00:15:56.685 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:56.685 "is_configured": false,
00:15:56.685 "data_offset": 2048,
00:15:56.685 "data_size": 63488
00:15:56.685 }
00:15:56.685 ]
00:15:56.685 }'
00:15:56.685 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:56.685 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.256 [2024-09-30 12:33:08.977115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:57.256 [2024-09-30 12:33:08.977207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:57.256 [2024-09-30 12:33:08.977235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:15:57.256 [2024-09-30 12:33:08.977261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:57.256 [2024-09-30 12:33:08.977590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:57.256 [2024-09-30 12:33:08.977648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:57.256 [2024-09-30 12:33:08.977721] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:57.256 [2024-09-30 12:33:08.977780] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:57.256 pt2
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.256 [2024-09-30 12:33:08.989122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:57.256 [2024-09-30 12:33:08.989208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:57.256 [2024-09-30 12:33:08.989234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:15:57.256 [2024-09-30 12:33:08.989265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:57.256 [2024-09-30 12:33:08.989576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:57.256 [2024-09-30 12:33:08.989635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:57.256 [2024-09-30 12:33:08.989710] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:15:57.256 [2024-09-30 12:33:08.989768] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:57.256 [2024-09-30 12:33:08.989904] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:15:57.256 [2024-09-30 12:33:08.989944] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:15:57.256 [2024-09-30 12:33:08.990178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:15:57.256 [2024-09-30 12:33:08.995050] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:15:57.256 [2024-09-30 12:33:08.995068] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:15:57.256 [2024-09-30 12:33:08.995221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:57.256 pt3
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:57.256 12:33:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:57.256 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:57.256 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:57.256 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.256 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.256 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.256 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:57.256 "name": "raid_bdev1",
00:15:57.256 "uuid": "c928cd88-b956-42aa-baa9-068077ceaf5d",
00:15:57.256 "strip_size_kb": 64,
00:15:57.256 "state": "online",
00:15:57.256 "raid_level": "raid5f",
00:15:57.256 "superblock": true,
00:15:57.256 "num_base_bdevs": 3,
00:15:57.256 "num_base_bdevs_discovered": 3,
00:15:57.256 "num_base_bdevs_operational": 3,
00:15:57.256 "base_bdevs_list": [
00:15:57.256 {
00:15:57.256 "name": "pt1",
00:15:57.256 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:57.256 "is_configured": true,
00:15:57.256 "data_offset": 2048,
00:15:57.256 "data_size": 63488
00:15:57.256 },
00:15:57.256 {
00:15:57.256 "name": "pt2",
00:15:57.256 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:57.256 "is_configured": true,
00:15:57.256 "data_offset": 2048,
00:15:57.256 "data_size": 63488
00:15:57.256 },
00:15:57.256 {
00:15:57.256 "name": "pt3",
00:15:57.256 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:57.256 "is_configured": true,
00:15:57.256 "data_offset": 2048,
00:15:57.256 "data_size": 63488
00:15:57.256 }
00:15:57.256 ]
00:15:57.256 }'
00:15:57.256 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:57.256 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.516 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:15:57.516 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:15:57.516 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:57.516 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:57.516 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:15:57.516 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:57.516 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:57.516 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:57.516 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.516 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.776 [2024-09-30 12:33:09.412601] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:57.776 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.776 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:57.776 "name": "raid_bdev1",
00:15:57.776 "aliases": [
00:15:57.776 "c928cd88-b956-42aa-baa9-068077ceaf5d"
00:15:57.776 ],
00:15:57.776 "product_name": "Raid Volume",
00:15:57.776 "block_size": 512,
00:15:57.776 "num_blocks": 126976,
00:15:57.776 "uuid": "c928cd88-b956-42aa-baa9-068077ceaf5d",
00:15:57.776 "assigned_rate_limits": {
00:15:57.776 "rw_ios_per_sec": 0,
00:15:57.776 "rw_mbytes_per_sec": 0,
00:15:57.776 "r_mbytes_per_sec": 0,
00:15:57.776 "w_mbytes_per_sec": 0
00:15:57.776 },
00:15:57.776 "claimed": false,
00:15:57.776 "zoned": false,
00:15:57.776 "supported_io_types": {
00:15:57.776 "read": true,
00:15:57.776 "write": true,
00:15:57.776 "unmap": false,
00:15:57.776 "flush": false,
00:15:57.776 "reset": true,
00:15:57.776 "nvme_admin": false,
00:15:57.776 "nvme_io": false,
00:15:57.776 "nvme_io_md": false,
00:15:57.776 "write_zeroes": true,
00:15:57.776 "zcopy": false,
00:15:57.776 "get_zone_info": false,
00:15:57.776 "zone_management": false,
00:15:57.776 "zone_append": false,
00:15:57.776 "compare": false,
00:15:57.776 "compare_and_write": false,
00:15:57.776 "abort": false,
00:15:57.776 "seek_hole": false,
00:15:57.776 "seek_data": false,
00:15:57.776 "copy": false,
00:15:57.776 "nvme_iov_md": false
00:15:57.776 },
00:15:57.776 "driver_specific": {
00:15:57.776 "raid": {
00:15:57.776 "uuid": "c928cd88-b956-42aa-baa9-068077ceaf5d",
00:15:57.776 "strip_size_kb": 64,
00:15:57.776 "state": "online",
00:15:57.776 "raid_level": "raid5f",
00:15:57.776 "superblock": true,
00:15:57.776 "num_base_bdevs": 3,
00:15:57.776 "num_base_bdevs_discovered": 3,
00:15:57.776 "num_base_bdevs_operational": 3,
00:15:57.776 "base_bdevs_list": [
00:15:57.776 {
00:15:57.776 "name": "pt1",
00:15:57.776 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:57.776 "is_configured": true,
00:15:57.776 "data_offset": 2048,
00:15:57.776 "data_size": 63488
00:15:57.776 },
00:15:57.776 {
00:15:57.776 "name": "pt2",
00:15:57.777 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:57.777 "is_configured": true,
00:15:57.777 "data_offset": 2048,
00:15:57.777 "data_size": 63488
00:15:57.777 },
00:15:57.777 {
00:15:57.777 "name": "pt3",
00:15:57.777 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:57.777 "is_configured": true,
00:15:57.777 "data_offset": 2048,
00:15:57.777 "data_size": 63488
00:15:57.777 }
00:15:57.777 ]
00:15:57.777 }
00:15:57.777 }
00:15:57.777 }'
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:15:57.777 pt2
00:15:57.777 pt3'
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.777 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.037 [2024-09-30 12:33:09.696080] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c928cd88-b956-42aa-baa9-068077ceaf5d '!=' c928cd88-b956-42aa-baa9-068077ceaf5d ']'
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.037 [2024-09-30 12:33:09.739892] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:58.037 "name": "raid_bdev1",
00:15:58.037 "uuid": "c928cd88-b956-42aa-baa9-068077ceaf5d",
00:15:58.037 "strip_size_kb": 64,
00:15:58.037 "state": "online",
00:15:58.037 "raid_level": "raid5f",
00:15:58.037 "superblock": true,
00:15:58.037 "num_base_bdevs": 3,
00:15:58.037 "num_base_bdevs_discovered": 2,
00:15:58.037 "num_base_bdevs_operational": 2,
00:15:58.037 "base_bdevs_list": [
00:15:58.037 {
00:15:58.037 "name": null,
00:15:58.037 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:58.037 "is_configured": false,
00:15:58.037 "data_offset": 0,
00:15:58.037 "data_size": 63488
00:15:58.037 },
00:15:58.037 {
00:15:58.037 "name": "pt2",
00:15:58.037 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:58.037 "is_configured": true,
00:15:58.037 "data_offset": 2048,
00:15:58.037 "data_size": 63488
00:15:58.037 },
00:15:58.037 {
00:15:58.037 "name": "pt3",
00:15:58.037 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:58.037 "is_configured": true,
00:15:58.037 "data_offset": 2048,
00:15:58.037 "data_size": 63488
00:15:58.037 }
00:15:58.037 ]
00:15:58.037 }'
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:58.037 12:33:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.297 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:58.297 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.297 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.297 [2024-09-30 12:33:10.183244] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:58.297 [2024-09-30 12:33:10.183312] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:58.297 [2024-09-30 12:33:10.183388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:58.297 [2024-09-30 12:33:10.183444] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:58.297 [2024-09-30 12:33:10.183499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:15:58.297 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.557 [2024-09-30 12:33:10.267090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:58.557 [2024-09-30 12:33:10.267135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.557 [2024-09-30 12:33:10.267147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:58.557 [2024-09-30 12:33:10.267156] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:58.557 [2024-09-30 12:33:10.269004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.557 [2024-09-30 12:33:10.269043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:58.557 [2024-09-30 12:33:10.269093] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:58.557 [2024-09-30 12:33:10.269136] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:58.557 pt2 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.557 "name": "raid_bdev1", 00:15:58.557 "uuid": "c928cd88-b956-42aa-baa9-068077ceaf5d", 00:15:58.557 "strip_size_kb": 64, 00:15:58.557 "state": "configuring", 00:15:58.557 "raid_level": "raid5f", 00:15:58.557 "superblock": true, 00:15:58.557 "num_base_bdevs": 3, 00:15:58.557 "num_base_bdevs_discovered": 1, 00:15:58.557 "num_base_bdevs_operational": 2, 00:15:58.557 "base_bdevs_list": [ 00:15:58.557 { 00:15:58.557 "name": null, 00:15:58.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.557 "is_configured": false, 00:15:58.557 "data_offset": 2048, 00:15:58.557 "data_size": 63488 00:15:58.557 }, 00:15:58.557 { 00:15:58.557 "name": "pt2", 00:15:58.557 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.557 "is_configured": true, 00:15:58.557 "data_offset": 2048, 00:15:58.557 "data_size": 63488 00:15:58.557 }, 00:15:58.557 { 00:15:58.557 "name": null, 00:15:58.557 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:58.557 "is_configured": false, 00:15:58.557 "data_offset": 2048, 00:15:58.557 "data_size": 63488 00:15:58.557 } 00:15:58.557 ] 00:15:58.557 }' 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.557 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.817 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:58.817 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:58.817 12:33:10 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:58.817 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:58.817 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.817 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.817 [2024-09-30 12:33:10.670396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:58.817 [2024-09-30 12:33:10.670489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.817 [2024-09-30 12:33:10.670519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:58.817 [2024-09-30 12:33:10.670547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.817 [2024-09-30 12:33:10.670888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.817 [2024-09-30 12:33:10.670948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:58.817 [2024-09-30 12:33:10.671022] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:58.817 [2024-09-30 12:33:10.671080] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:58.817 [2024-09-30 12:33:10.671194] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:58.817 [2024-09-30 12:33:10.671232] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:58.817 [2024-09-30 12:33:10.671447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:58.817 [2024-09-30 12:33:10.676616] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:58.817 [2024-09-30 12:33:10.676668] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:15:58.817 [2024-09-30 12:33:10.676969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.817 pt3 00:15:58.817 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.817 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:58.817 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.817 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.817 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.817 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.817 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.817 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.817 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.817 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.817 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.817 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.817 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.818 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.818 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.818 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.077 12:33:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.077 "name": "raid_bdev1", 00:15:59.077 "uuid": "c928cd88-b956-42aa-baa9-068077ceaf5d", 00:15:59.077 "strip_size_kb": 64, 00:15:59.077 "state": "online", 00:15:59.077 "raid_level": "raid5f", 00:15:59.077 "superblock": true, 00:15:59.077 "num_base_bdevs": 3, 00:15:59.077 "num_base_bdevs_discovered": 2, 00:15:59.077 "num_base_bdevs_operational": 2, 00:15:59.077 "base_bdevs_list": [ 00:15:59.077 { 00:15:59.077 "name": null, 00:15:59.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.077 "is_configured": false, 00:15:59.077 "data_offset": 2048, 00:15:59.077 "data_size": 63488 00:15:59.077 }, 00:15:59.077 { 00:15:59.077 "name": "pt2", 00:15:59.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.077 "is_configured": true, 00:15:59.077 "data_offset": 2048, 00:15:59.077 "data_size": 63488 00:15:59.077 }, 00:15:59.077 { 00:15:59.077 "name": "pt3", 00:15:59.077 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:59.077 "is_configured": true, 00:15:59.077 "data_offset": 2048, 00:15:59.077 "data_size": 63488 00:15:59.077 } 00:15:59.077 ] 00:15:59.077 }' 00:15:59.077 12:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.077 12:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.337 [2024-09-30 12:33:11.158158] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.337 [2024-09-30 12:33:11.158228] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:59.337 [2024-09-30 12:33:11.158278] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.337 [2024-09-30 12:33:11.158322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:59.337 [2024-09-30 12:33:11.158330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.337 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.597 [2024-09-30 12:33:11.234052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:59.597 [2024-09-30 12:33:11.234099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.597 [2024-09-30 12:33:11.234113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:59.597 [2024-09-30 12:33:11.234122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.597 [2024-09-30 12:33:11.236180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.597 [2024-09-30 12:33:11.236216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:59.597 [2024-09-30 12:33:11.236274] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:59.597 [2024-09-30 12:33:11.236313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:59.597 [2024-09-30 12:33:11.236411] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:59.597 [2024-09-30 12:33:11.236423] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.597 [2024-09-30 12:33:11.236437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:59.597 [2024-09-30 12:33:11.236499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:59.597 pt1 00:15:59.597 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.597 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:59.597 12:33:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:59.597 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.597 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.597 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.597 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.597 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.597 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.597 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.597 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.597 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.597 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.597 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.597 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.597 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.597 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.597 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.597 "name": "raid_bdev1", 00:15:59.597 "uuid": "c928cd88-b956-42aa-baa9-068077ceaf5d", 00:15:59.597 "strip_size_kb": 64, 00:15:59.597 "state": "configuring", 00:15:59.597 "raid_level": "raid5f", 00:15:59.597 
"superblock": true, 00:15:59.597 "num_base_bdevs": 3, 00:15:59.597 "num_base_bdevs_discovered": 1, 00:15:59.597 "num_base_bdevs_operational": 2, 00:15:59.597 "base_bdevs_list": [ 00:15:59.597 { 00:15:59.597 "name": null, 00:15:59.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.597 "is_configured": false, 00:15:59.597 "data_offset": 2048, 00:15:59.597 "data_size": 63488 00:15:59.597 }, 00:15:59.597 { 00:15:59.597 "name": "pt2", 00:15:59.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.597 "is_configured": true, 00:15:59.597 "data_offset": 2048, 00:15:59.597 "data_size": 63488 00:15:59.597 }, 00:15:59.597 { 00:15:59.597 "name": null, 00:15:59.597 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:59.597 "is_configured": false, 00:15:59.597 "data_offset": 2048, 00:15:59.597 "data_size": 63488 00:15:59.597 } 00:15:59.597 ] 00:15:59.597 }' 00:15:59.597 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.597 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.858 [2024-09-30 12:33:11.653324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:59.858 [2024-09-30 12:33:11.653416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.858 [2024-09-30 12:33:11.653448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:59.858 [2024-09-30 12:33:11.653472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.858 [2024-09-30 12:33:11.653840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.858 [2024-09-30 12:33:11.653898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:59.858 [2024-09-30 12:33:11.653987] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:59.858 [2024-09-30 12:33:11.654030] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:59.858 [2024-09-30 12:33:11.654139] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:59.858 [2024-09-30 12:33:11.654179] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:59.858 [2024-09-30 12:33:11.654428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:59.858 [2024-09-30 12:33:11.659813] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:59.858 [2024-09-30 12:33:11.659869] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:59.858 [2024-09-30 12:33:11.660111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.858 pt3 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.858 "name": "raid_bdev1", 00:15:59.858 "uuid": "c928cd88-b956-42aa-baa9-068077ceaf5d", 00:15:59.858 "strip_size_kb": 64, 00:15:59.858 "state": "online", 00:15:59.858 "raid_level": 
"raid5f", 00:15:59.858 "superblock": true, 00:15:59.858 "num_base_bdevs": 3, 00:15:59.858 "num_base_bdevs_discovered": 2, 00:15:59.858 "num_base_bdevs_operational": 2, 00:15:59.858 "base_bdevs_list": [ 00:15:59.858 { 00:15:59.858 "name": null, 00:15:59.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.858 "is_configured": false, 00:15:59.858 "data_offset": 2048, 00:15:59.858 "data_size": 63488 00:15:59.858 }, 00:15:59.858 { 00:15:59.858 "name": "pt2", 00:15:59.858 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.858 "is_configured": true, 00:15:59.858 "data_offset": 2048, 00:15:59.858 "data_size": 63488 00:15:59.858 }, 00:15:59.858 { 00:15:59.858 "name": "pt3", 00:15:59.858 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:59.858 "is_configured": true, 00:15:59.858 "data_offset": 2048, 00:15:59.858 "data_size": 63488 00:15:59.858 } 00:15:59.858 ] 00:15:59.858 }' 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.858 12:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:00.428 [2024-09-30 12:33:12.197112] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c928cd88-b956-42aa-baa9-068077ceaf5d '!=' c928cd88-b956-42aa-baa9-068077ceaf5d ']' 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80984 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 80984 ']' 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 80984 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80984 00:16:00.428 killing process with pid 80984 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80984' 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 80984 00:16:00.428 [2024-09-30 12:33:12.277882] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:00.428 [2024-09-30 12:33:12.277957] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:16:00.428 [2024-09-30 12:33:12.278001] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.428 [2024-09-30 12:33:12.278012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:00.428 12:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 80984 00:16:00.688 [2024-09-30 12:33:12.558331] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:02.071 12:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:02.071 00:16:02.071 real 0m7.833s 00:16:02.071 user 0m12.153s 00:16:02.071 sys 0m1.459s 00:16:02.071 12:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:02.071 ************************************ 00:16:02.071 END TEST raid5f_superblock_test 00:16:02.071 ************************************ 00:16:02.071 12:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.071 12:33:13 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:02.071 12:33:13 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:16:02.071 12:33:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:02.071 12:33:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:02.071 12:33:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:02.071 ************************************ 00:16:02.071 START TEST raid5f_rebuild_test 00:16:02.071 ************************************ 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81422 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81422 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 81422 ']' 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:02.071 12:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.071 [2024-09-30 12:33:13.933728] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:16:02.071 [2024-09-30 12:33:13.933913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81422 ] 00:16:02.071 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:02.071 Zero copy mechanism will not be used. 00:16:02.331 [2024-09-30 12:33:14.096489] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.591 [2024-09-30 12:33:14.282729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.591 [2024-09-30 12:33:14.448928] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.591 [2024-09-30 12:33:14.449044] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.163 BaseBdev1_malloc 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.163 
12:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.163 [2024-09-30 12:33:14.819335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:03.163 [2024-09-30 12:33:14.819414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.163 [2024-09-30 12:33:14.819437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:03.163 [2024-09-30 12:33:14.819450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.163 [2024-09-30 12:33:14.821369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.163 [2024-09-30 12:33:14.821408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:03.163 BaseBdev1 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.163 BaseBdev2_malloc 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.163 [2024-09-30 12:33:14.902214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:03.163 [2024-09-30 12:33:14.902358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.163 [2024-09-30 12:33:14.902382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:03.163 [2024-09-30 12:33:14.902392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.163 [2024-09-30 12:33:14.904310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.163 [2024-09-30 12:33:14.904350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:03.163 BaseBdev2 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.163 BaseBdev3_malloc 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.163 [2024-09-30 12:33:14.954608] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:03.163 [2024-09-30 12:33:14.954660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.163 [2024-09-30 12:33:14.954679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:03.163 [2024-09-30 12:33:14.954690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.163 [2024-09-30 12:33:14.956807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.163 [2024-09-30 12:33:14.956848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:03.163 BaseBdev3 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.163 spare_malloc 00:16:03.163 12:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.163 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:03.163 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.163 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.163 spare_delay 00:16:03.163 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.163 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:03.163 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:03.163 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.163 [2024-09-30 12:33:15.019266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:03.163 [2024-09-30 12:33:15.019321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.163 [2024-09-30 12:33:15.019337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:03.163 [2024-09-30 12:33:15.019347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.163 [2024-09-30 12:33:15.021242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.163 [2024-09-30 12:33:15.021286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:03.163 spare 00:16:03.163 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.163 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:03.163 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.163 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.163 [2024-09-30 12:33:15.031311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.163 [2024-09-30 12:33:15.032950] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.163 [2024-09-30 12:33:15.033090] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:03.163 [2024-09-30 12:33:15.033171] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:03.163 [2024-09-30 12:33:15.033181] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:03.163 [2024-09-30 
12:33:15.033403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:03.163 [2024-09-30 12:33:15.038655] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:03.163 [2024-09-30 12:33:15.038679] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:03.163 [2024-09-30 12:33:15.038861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.163 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.163 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:03.163 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.163 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.163 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.163 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.164 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.164 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.164 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.164 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.164 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.164 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.164 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.164 12:33:15 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.164 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.424 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.424 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.424 "name": "raid_bdev1", 00:16:03.424 "uuid": "2a206d5f-6fd8-4462-b747-08e32cf23838", 00:16:03.424 "strip_size_kb": 64, 00:16:03.424 "state": "online", 00:16:03.424 "raid_level": "raid5f", 00:16:03.424 "superblock": false, 00:16:03.424 "num_base_bdevs": 3, 00:16:03.424 "num_base_bdevs_discovered": 3, 00:16:03.424 "num_base_bdevs_operational": 3, 00:16:03.424 "base_bdevs_list": [ 00:16:03.424 { 00:16:03.424 "name": "BaseBdev1", 00:16:03.424 "uuid": "3e55a577-0ea6-5acb-8afb-88f1bbde2c1d", 00:16:03.424 "is_configured": true, 00:16:03.424 "data_offset": 0, 00:16:03.424 "data_size": 65536 00:16:03.424 }, 00:16:03.424 { 00:16:03.424 "name": "BaseBdev2", 00:16:03.424 "uuid": "ba7ce151-8199-5808-b19a-5695722b3e2b", 00:16:03.424 "is_configured": true, 00:16:03.424 "data_offset": 0, 00:16:03.424 "data_size": 65536 00:16:03.424 }, 00:16:03.424 { 00:16:03.424 "name": "BaseBdev3", 00:16:03.424 "uuid": "eba0ad64-e348-57d1-9760-34ffe92a20fa", 00:16:03.424 "is_configured": true, 00:16:03.424 "data_offset": 0, 00:16:03.424 "data_size": 65536 00:16:03.424 } 00:16:03.424 ] 00:16:03.424 }' 00:16:03.424 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.424 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.684 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:03.684 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:03.684 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.684 12:33:15 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.684 [2024-09-30 12:33:15.464014] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.684 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.684 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:16:03.684 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.684 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:03.684 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.684 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.684 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.684 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:03.684 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:03.684 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:03.684 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:03.685 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:03.685 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:03.685 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:03.685 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:03.685 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:03.685 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:16:03.685 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:03.685 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:03.685 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:03.685 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:03.945 [2024-09-30 12:33:15.739419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:03.945 /dev/nbd0 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:03.945 1+0 records in 00:16:03.945 1+0 records out 00:16:03.945 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418065 s, 9.8 MB/s 00:16:03.945 
12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:03.945 12:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:16:04.515 512+0 records in 00:16:04.515 512+0 records out 00:16:04.515 67108864 bytes (67 MB, 64 MiB) copied, 0.314946 s, 213 MB/s 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:04.515 [2024-09-30 12:33:16.315203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.515 [2024-09-30 12:33:16.369161] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.515 12:33:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.515 12:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.775 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.775 "name": "raid_bdev1", 00:16:04.775 "uuid": "2a206d5f-6fd8-4462-b747-08e32cf23838", 00:16:04.775 "strip_size_kb": 64, 00:16:04.775 "state": "online", 00:16:04.775 "raid_level": "raid5f", 00:16:04.775 "superblock": false, 00:16:04.775 "num_base_bdevs": 3, 00:16:04.775 "num_base_bdevs_discovered": 2, 00:16:04.775 "num_base_bdevs_operational": 2, 00:16:04.775 "base_bdevs_list": [ 00:16:04.775 { 00:16:04.775 "name": null, 00:16:04.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.775 "is_configured": false, 00:16:04.775 "data_offset": 0, 00:16:04.775 "data_size": 65536 00:16:04.775 }, 00:16:04.775 { 00:16:04.775 
"name": "BaseBdev2", 00:16:04.775 "uuid": "ba7ce151-8199-5808-b19a-5695722b3e2b", 00:16:04.775 "is_configured": true, 00:16:04.775 "data_offset": 0, 00:16:04.775 "data_size": 65536 00:16:04.775 }, 00:16:04.775 { 00:16:04.775 "name": "BaseBdev3", 00:16:04.775 "uuid": "eba0ad64-e348-57d1-9760-34ffe92a20fa", 00:16:04.775 "is_configured": true, 00:16:04.775 "data_offset": 0, 00:16:04.775 "data_size": 65536 00:16:04.775 } 00:16:04.775 ] 00:16:04.775 }' 00:16:04.775 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.775 12:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.035 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:05.035 12:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.035 12:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.035 [2024-09-30 12:33:16.824468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:05.035 [2024-09-30 12:33:16.839659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:16:05.035 12:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.035 12:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:05.035 [2024-09-30 12:33:16.847068] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:06.041 12:33:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.041 12:33:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.041 12:33:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.041 12:33:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:16:06.041 12:33:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.041 12:33:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.041 12:33:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.041 12:33:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.041 12:33:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.041 12:33:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.041 12:33:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.041 "name": "raid_bdev1", 00:16:06.041 "uuid": "2a206d5f-6fd8-4462-b747-08e32cf23838", 00:16:06.041 "strip_size_kb": 64, 00:16:06.041 "state": "online", 00:16:06.041 "raid_level": "raid5f", 00:16:06.041 "superblock": false, 00:16:06.041 "num_base_bdevs": 3, 00:16:06.041 "num_base_bdevs_discovered": 3, 00:16:06.041 "num_base_bdevs_operational": 3, 00:16:06.041 "process": { 00:16:06.041 "type": "rebuild", 00:16:06.041 "target": "spare", 00:16:06.041 "progress": { 00:16:06.041 "blocks": 20480, 00:16:06.041 "percent": 15 00:16:06.041 } 00:16:06.041 }, 00:16:06.041 "base_bdevs_list": [ 00:16:06.041 { 00:16:06.041 "name": "spare", 00:16:06.041 "uuid": "b18fe216-6392-54ef-adf4-972e4f63f6ae", 00:16:06.041 "is_configured": true, 00:16:06.041 "data_offset": 0, 00:16:06.041 "data_size": 65536 00:16:06.041 }, 00:16:06.041 { 00:16:06.041 "name": "BaseBdev2", 00:16:06.041 "uuid": "ba7ce151-8199-5808-b19a-5695722b3e2b", 00:16:06.041 "is_configured": true, 00:16:06.041 "data_offset": 0, 00:16:06.041 "data_size": 65536 00:16:06.041 }, 00:16:06.041 { 00:16:06.041 "name": "BaseBdev3", 00:16:06.041 "uuid": "eba0ad64-e348-57d1-9760-34ffe92a20fa", 00:16:06.041 "is_configured": true, 00:16:06.041 "data_offset": 0, 00:16:06.041 
"data_size": 65536 00:16:06.041 } 00:16:06.041 ] 00:16:06.041 }' 00:16:06.041 12:33:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.302 12:33:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.302 12:33:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.302 12:33:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.302 12:33:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:06.302 12:33:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.302 12:33:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.302 [2024-09-30 12:33:17.993844] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:06.302 [2024-09-30 12:33:18.053940] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:06.302 [2024-09-30 12:33:18.053991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.302 [2024-09-30 12:33:18.054009] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:06.302 [2024-09-30 12:33:18.054016] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:06.302 12:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.302 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:06.302 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.302 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.302 12:33:18 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.302 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.302 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.302 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.302 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.302 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.302 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.302 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.302 12:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.302 12:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.302 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.302 12:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.302 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.302 "name": "raid_bdev1", 00:16:06.302 "uuid": "2a206d5f-6fd8-4462-b747-08e32cf23838", 00:16:06.302 "strip_size_kb": 64, 00:16:06.302 "state": "online", 00:16:06.302 "raid_level": "raid5f", 00:16:06.302 "superblock": false, 00:16:06.302 "num_base_bdevs": 3, 00:16:06.302 "num_base_bdevs_discovered": 2, 00:16:06.302 "num_base_bdevs_operational": 2, 00:16:06.302 "base_bdevs_list": [ 00:16:06.302 { 00:16:06.302 "name": null, 00:16:06.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.302 "is_configured": false, 00:16:06.302 "data_offset": 0, 00:16:06.302 "data_size": 65536 00:16:06.302 }, 00:16:06.302 { 00:16:06.302 "name": "BaseBdev2", 00:16:06.302 
"uuid": "ba7ce151-8199-5808-b19a-5695722b3e2b", 00:16:06.303 "is_configured": true, 00:16:06.303 "data_offset": 0, 00:16:06.303 "data_size": 65536 00:16:06.303 }, 00:16:06.303 { 00:16:06.303 "name": "BaseBdev3", 00:16:06.303 "uuid": "eba0ad64-e348-57d1-9760-34ffe92a20fa", 00:16:06.303 "is_configured": true, 00:16:06.303 "data_offset": 0, 00:16:06.303 "data_size": 65536 00:16:06.303 } 00:16:06.303 ] 00:16:06.303 }' 00:16:06.303 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.303 12:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.873 "name": "raid_bdev1", 00:16:06.873 "uuid": "2a206d5f-6fd8-4462-b747-08e32cf23838", 00:16:06.873 "strip_size_kb": 64, 00:16:06.873 "state": "online", 00:16:06.873 "raid_level": 
"raid5f", 00:16:06.873 "superblock": false, 00:16:06.873 "num_base_bdevs": 3, 00:16:06.873 "num_base_bdevs_discovered": 2, 00:16:06.873 "num_base_bdevs_operational": 2, 00:16:06.873 "base_bdevs_list": [ 00:16:06.873 { 00:16:06.873 "name": null, 00:16:06.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.873 "is_configured": false, 00:16:06.873 "data_offset": 0, 00:16:06.873 "data_size": 65536 00:16:06.873 }, 00:16:06.873 { 00:16:06.873 "name": "BaseBdev2", 00:16:06.873 "uuid": "ba7ce151-8199-5808-b19a-5695722b3e2b", 00:16:06.873 "is_configured": true, 00:16:06.873 "data_offset": 0, 00:16:06.873 "data_size": 65536 00:16:06.873 }, 00:16:06.873 { 00:16:06.873 "name": "BaseBdev3", 00:16:06.873 "uuid": "eba0ad64-e348-57d1-9760-34ffe92a20fa", 00:16:06.873 "is_configured": true, 00:16:06.873 "data_offset": 0, 00:16:06.873 "data_size": 65536 00:16:06.873 } 00:16:06.873 ] 00:16:06.873 }' 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.873 [2024-09-30 12:33:18.667508] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.873 [2024-09-30 12:33:18.680521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.873 12:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:06.873 [2024-09-30 12:33:18.687094] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:07.812 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.812 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.812 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.812 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.812 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.812 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.812 12:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.812 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.812 12:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.073 12:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.073 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.073 "name": "raid_bdev1", 00:16:08.073 "uuid": "2a206d5f-6fd8-4462-b747-08e32cf23838", 00:16:08.073 "strip_size_kb": 64, 00:16:08.073 "state": "online", 00:16:08.073 "raid_level": "raid5f", 00:16:08.073 "superblock": false, 00:16:08.073 "num_base_bdevs": 3, 00:16:08.073 "num_base_bdevs_discovered": 3, 00:16:08.073 "num_base_bdevs_operational": 3, 00:16:08.073 "process": { 00:16:08.073 "type": "rebuild", 00:16:08.073 "target": "spare", 00:16:08.073 "progress": { 00:16:08.073 "blocks": 20480, 00:16:08.073 
"percent": 15 00:16:08.073 } 00:16:08.073 }, 00:16:08.073 "base_bdevs_list": [ 00:16:08.073 { 00:16:08.073 "name": "spare", 00:16:08.073 "uuid": "b18fe216-6392-54ef-adf4-972e4f63f6ae", 00:16:08.073 "is_configured": true, 00:16:08.073 "data_offset": 0, 00:16:08.073 "data_size": 65536 00:16:08.073 }, 00:16:08.073 { 00:16:08.073 "name": "BaseBdev2", 00:16:08.073 "uuid": "ba7ce151-8199-5808-b19a-5695722b3e2b", 00:16:08.073 "is_configured": true, 00:16:08.073 "data_offset": 0, 00:16:08.073 "data_size": 65536 00:16:08.073 }, 00:16:08.073 { 00:16:08.073 "name": "BaseBdev3", 00:16:08.073 "uuid": "eba0ad64-e348-57d1-9760-34ffe92a20fa", 00:16:08.073 "is_configured": true, 00:16:08.073 "data_offset": 0, 00:16:08.073 "data_size": 65536 00:16:08.073 } 00:16:08.073 ] 00:16:08.073 }' 00:16:08.073 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.073 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.073 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.073 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.073 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:08.073 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:08.073 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:08.073 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=544 00:16:08.073 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.073 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.073 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:08.073 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.073 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.073 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.073 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.073 12:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.073 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.074 12:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.074 12:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.074 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.074 "name": "raid_bdev1", 00:16:08.074 "uuid": "2a206d5f-6fd8-4462-b747-08e32cf23838", 00:16:08.074 "strip_size_kb": 64, 00:16:08.074 "state": "online", 00:16:08.074 "raid_level": "raid5f", 00:16:08.074 "superblock": false, 00:16:08.074 "num_base_bdevs": 3, 00:16:08.074 "num_base_bdevs_discovered": 3, 00:16:08.074 "num_base_bdevs_operational": 3, 00:16:08.074 "process": { 00:16:08.074 "type": "rebuild", 00:16:08.074 "target": "spare", 00:16:08.074 "progress": { 00:16:08.074 "blocks": 22528, 00:16:08.074 "percent": 17 00:16:08.074 } 00:16:08.074 }, 00:16:08.074 "base_bdevs_list": [ 00:16:08.074 { 00:16:08.074 "name": "spare", 00:16:08.074 "uuid": "b18fe216-6392-54ef-adf4-972e4f63f6ae", 00:16:08.074 "is_configured": true, 00:16:08.074 "data_offset": 0, 00:16:08.074 "data_size": 65536 00:16:08.074 }, 00:16:08.074 { 00:16:08.074 "name": "BaseBdev2", 00:16:08.074 "uuid": "ba7ce151-8199-5808-b19a-5695722b3e2b", 00:16:08.074 "is_configured": true, 00:16:08.074 "data_offset": 0, 00:16:08.074 
"data_size": 65536 00:16:08.074 }, 00:16:08.074 { 00:16:08.074 "name": "BaseBdev3", 00:16:08.074 "uuid": "eba0ad64-e348-57d1-9760-34ffe92a20fa", 00:16:08.074 "is_configured": true, 00:16:08.074 "data_offset": 0, 00:16:08.074 "data_size": 65536 00:16:08.074 } 00:16:08.074 ] 00:16:08.074 }' 00:16:08.074 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.074 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.074 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.074 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.074 12:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:09.454 12:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.454 12:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.454 12:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.454 12:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.454 12:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.454 12:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.454 12:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.454 12:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.454 12:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.454 12:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.454 12:33:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.454 12:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.454 "name": "raid_bdev1", 00:16:09.454 "uuid": "2a206d5f-6fd8-4462-b747-08e32cf23838", 00:16:09.454 "strip_size_kb": 64, 00:16:09.454 "state": "online", 00:16:09.454 "raid_level": "raid5f", 00:16:09.454 "superblock": false, 00:16:09.454 "num_base_bdevs": 3, 00:16:09.454 "num_base_bdevs_discovered": 3, 00:16:09.454 "num_base_bdevs_operational": 3, 00:16:09.454 "process": { 00:16:09.454 "type": "rebuild", 00:16:09.454 "target": "spare", 00:16:09.454 "progress": { 00:16:09.454 "blocks": 45056, 00:16:09.454 "percent": 34 00:16:09.454 } 00:16:09.454 }, 00:16:09.454 "base_bdevs_list": [ 00:16:09.454 { 00:16:09.454 "name": "spare", 00:16:09.454 "uuid": "b18fe216-6392-54ef-adf4-972e4f63f6ae", 00:16:09.454 "is_configured": true, 00:16:09.454 "data_offset": 0, 00:16:09.454 "data_size": 65536 00:16:09.454 }, 00:16:09.454 { 00:16:09.454 "name": "BaseBdev2", 00:16:09.454 "uuid": "ba7ce151-8199-5808-b19a-5695722b3e2b", 00:16:09.454 "is_configured": true, 00:16:09.454 "data_offset": 0, 00:16:09.454 "data_size": 65536 00:16:09.454 }, 00:16:09.454 { 00:16:09.454 "name": "BaseBdev3", 00:16:09.454 "uuid": "eba0ad64-e348-57d1-9760-34ffe92a20fa", 00:16:09.454 "is_configured": true, 00:16:09.454 "data_offset": 0, 00:16:09.454 "data_size": 65536 00:16:09.454 } 00:16:09.454 ] 00:16:09.454 }' 00:16:09.454 12:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.454 12:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.454 12:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.454 12:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.454 12:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:16:10.394 12:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.394 12:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.394 12:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.394 12:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.394 12:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.394 12:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.394 12:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.394 12:33:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.394 12:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.394 12:33:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.394 12:33:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.394 12:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.394 "name": "raid_bdev1", 00:16:10.394 "uuid": "2a206d5f-6fd8-4462-b747-08e32cf23838", 00:16:10.394 "strip_size_kb": 64, 00:16:10.394 "state": "online", 00:16:10.394 "raid_level": "raid5f", 00:16:10.394 "superblock": false, 00:16:10.394 "num_base_bdevs": 3, 00:16:10.394 "num_base_bdevs_discovered": 3, 00:16:10.394 "num_base_bdevs_operational": 3, 00:16:10.394 "process": { 00:16:10.394 "type": "rebuild", 00:16:10.394 "target": "spare", 00:16:10.394 "progress": { 00:16:10.394 "blocks": 69632, 00:16:10.394 "percent": 53 00:16:10.394 } 00:16:10.394 }, 00:16:10.394 "base_bdevs_list": [ 00:16:10.394 { 00:16:10.394 "name": "spare", 00:16:10.394 "uuid": 
"b18fe216-6392-54ef-adf4-972e4f63f6ae", 00:16:10.394 "is_configured": true, 00:16:10.394 "data_offset": 0, 00:16:10.394 "data_size": 65536 00:16:10.394 }, 00:16:10.394 { 00:16:10.394 "name": "BaseBdev2", 00:16:10.394 "uuid": "ba7ce151-8199-5808-b19a-5695722b3e2b", 00:16:10.394 "is_configured": true, 00:16:10.394 "data_offset": 0, 00:16:10.394 "data_size": 65536 00:16:10.394 }, 00:16:10.394 { 00:16:10.394 "name": "BaseBdev3", 00:16:10.394 "uuid": "eba0ad64-e348-57d1-9760-34ffe92a20fa", 00:16:10.394 "is_configured": true, 00:16:10.394 "data_offset": 0, 00:16:10.394 "data_size": 65536 00:16:10.394 } 00:16:10.394 ] 00:16:10.394 }' 00:16:10.394 12:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.394 12:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.394 12:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.394 12:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.394 12:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:11.776 12:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.776 12:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.776 12:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.776 12:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.776 12:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.776 12:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.776 12:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.776 12:33:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.776 12:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.776 12:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.776 12:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.776 12:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.776 "name": "raid_bdev1", 00:16:11.776 "uuid": "2a206d5f-6fd8-4462-b747-08e32cf23838", 00:16:11.776 "strip_size_kb": 64, 00:16:11.776 "state": "online", 00:16:11.776 "raid_level": "raid5f", 00:16:11.776 "superblock": false, 00:16:11.776 "num_base_bdevs": 3, 00:16:11.776 "num_base_bdevs_discovered": 3, 00:16:11.776 "num_base_bdevs_operational": 3, 00:16:11.776 "process": { 00:16:11.776 "type": "rebuild", 00:16:11.776 "target": "spare", 00:16:11.776 "progress": { 00:16:11.776 "blocks": 92160, 00:16:11.776 "percent": 70 00:16:11.776 } 00:16:11.776 }, 00:16:11.776 "base_bdevs_list": [ 00:16:11.776 { 00:16:11.776 "name": "spare", 00:16:11.776 "uuid": "b18fe216-6392-54ef-adf4-972e4f63f6ae", 00:16:11.776 "is_configured": true, 00:16:11.776 "data_offset": 0, 00:16:11.776 "data_size": 65536 00:16:11.776 }, 00:16:11.776 { 00:16:11.776 "name": "BaseBdev2", 00:16:11.776 "uuid": "ba7ce151-8199-5808-b19a-5695722b3e2b", 00:16:11.776 "is_configured": true, 00:16:11.776 "data_offset": 0, 00:16:11.776 "data_size": 65536 00:16:11.776 }, 00:16:11.776 { 00:16:11.776 "name": "BaseBdev3", 00:16:11.776 "uuid": "eba0ad64-e348-57d1-9760-34ffe92a20fa", 00:16:11.776 "is_configured": true, 00:16:11.776 "data_offset": 0, 00:16:11.776 "data_size": 65536 00:16:11.776 } 00:16:11.776 ] 00:16:11.776 }' 00:16:11.776 12:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.776 12:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.776 12:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.776 12:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.776 12:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:12.717 12:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.717 12:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.717 12:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.717 12:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.717 12:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.717 12:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.717 12:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.717 12:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.717 12:33:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.717 12:33:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.717 12:33:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.717 12:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.717 "name": "raid_bdev1", 00:16:12.717 "uuid": "2a206d5f-6fd8-4462-b747-08e32cf23838", 00:16:12.717 "strip_size_kb": 64, 00:16:12.717 "state": "online", 00:16:12.717 "raid_level": "raid5f", 00:16:12.717 "superblock": false, 00:16:12.717 "num_base_bdevs": 3, 00:16:12.717 "num_base_bdevs_discovered": 3, 00:16:12.717 
"num_base_bdevs_operational": 3, 00:16:12.717 "process": { 00:16:12.717 "type": "rebuild", 00:16:12.717 "target": "spare", 00:16:12.717 "progress": { 00:16:12.717 "blocks": 116736, 00:16:12.717 "percent": 89 00:16:12.717 } 00:16:12.717 }, 00:16:12.717 "base_bdevs_list": [ 00:16:12.717 { 00:16:12.717 "name": "spare", 00:16:12.717 "uuid": "b18fe216-6392-54ef-adf4-972e4f63f6ae", 00:16:12.717 "is_configured": true, 00:16:12.717 "data_offset": 0, 00:16:12.717 "data_size": 65536 00:16:12.717 }, 00:16:12.717 { 00:16:12.717 "name": "BaseBdev2", 00:16:12.717 "uuid": "ba7ce151-8199-5808-b19a-5695722b3e2b", 00:16:12.717 "is_configured": true, 00:16:12.717 "data_offset": 0, 00:16:12.717 "data_size": 65536 00:16:12.717 }, 00:16:12.717 { 00:16:12.717 "name": "BaseBdev3", 00:16:12.717 "uuid": "eba0ad64-e348-57d1-9760-34ffe92a20fa", 00:16:12.717 "is_configured": true, 00:16:12.717 "data_offset": 0, 00:16:12.717 "data_size": 65536 00:16:12.717 } 00:16:12.717 ] 00:16:12.717 }' 00:16:12.717 12:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.717 12:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.717 12:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.717 12:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.717 12:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:13.285 [2024-09-30 12:33:25.119902] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:13.285 [2024-09-30 12:33:25.119969] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:13.285 [2024-09-30 12:33:25.120010] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.853 "name": "raid_bdev1", 00:16:13.853 "uuid": "2a206d5f-6fd8-4462-b747-08e32cf23838", 00:16:13.853 "strip_size_kb": 64, 00:16:13.853 "state": "online", 00:16:13.853 "raid_level": "raid5f", 00:16:13.853 "superblock": false, 00:16:13.853 "num_base_bdevs": 3, 00:16:13.853 "num_base_bdevs_discovered": 3, 00:16:13.853 "num_base_bdevs_operational": 3, 00:16:13.853 "base_bdevs_list": [ 00:16:13.853 { 00:16:13.853 "name": "spare", 00:16:13.853 "uuid": "b18fe216-6392-54ef-adf4-972e4f63f6ae", 00:16:13.853 "is_configured": true, 00:16:13.853 "data_offset": 0, 00:16:13.853 "data_size": 65536 00:16:13.853 }, 00:16:13.853 { 00:16:13.853 "name": "BaseBdev2", 00:16:13.853 "uuid": "ba7ce151-8199-5808-b19a-5695722b3e2b", 00:16:13.853 "is_configured": true, 00:16:13.853 
"data_offset": 0, 00:16:13.853 "data_size": 65536 00:16:13.853 }, 00:16:13.853 { 00:16:13.853 "name": "BaseBdev3", 00:16:13.853 "uuid": "eba0ad64-e348-57d1-9760-34ffe92a20fa", 00:16:13.853 "is_configured": true, 00:16:13.853 "data_offset": 0, 00:16:13.853 "data_size": 65536 00:16:13.853 } 00:16:13.853 ] 00:16:13.853 }' 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.853 12:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.112 12:33:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.112 "name": "raid_bdev1", 00:16:14.112 "uuid": "2a206d5f-6fd8-4462-b747-08e32cf23838", 00:16:14.112 "strip_size_kb": 64, 00:16:14.112 "state": "online", 00:16:14.112 "raid_level": "raid5f", 00:16:14.112 "superblock": false, 00:16:14.112 "num_base_bdevs": 3, 00:16:14.112 "num_base_bdevs_discovered": 3, 00:16:14.112 "num_base_bdevs_operational": 3, 00:16:14.112 "base_bdevs_list": [ 00:16:14.112 { 00:16:14.112 "name": "spare", 00:16:14.112 "uuid": "b18fe216-6392-54ef-adf4-972e4f63f6ae", 00:16:14.112 "is_configured": true, 00:16:14.112 "data_offset": 0, 00:16:14.112 "data_size": 65536 00:16:14.112 }, 00:16:14.112 { 00:16:14.112 "name": "BaseBdev2", 00:16:14.112 "uuid": "ba7ce151-8199-5808-b19a-5695722b3e2b", 00:16:14.112 "is_configured": true, 00:16:14.112 "data_offset": 0, 00:16:14.112 "data_size": 65536 00:16:14.112 }, 00:16:14.112 { 00:16:14.112 "name": "BaseBdev3", 00:16:14.112 "uuid": "eba0ad64-e348-57d1-9760-34ffe92a20fa", 00:16:14.112 "is_configured": true, 00:16:14.112 "data_offset": 0, 00:16:14.112 "data_size": 65536 00:16:14.112 } 00:16:14.112 ] 00:16:14.112 }' 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.112 12:33:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.112 "name": "raid_bdev1", 00:16:14.112 "uuid": "2a206d5f-6fd8-4462-b747-08e32cf23838", 00:16:14.112 "strip_size_kb": 64, 00:16:14.112 "state": "online", 00:16:14.112 "raid_level": "raid5f", 00:16:14.112 "superblock": false, 00:16:14.112 "num_base_bdevs": 3, 00:16:14.112 "num_base_bdevs_discovered": 3, 00:16:14.112 "num_base_bdevs_operational": 3, 00:16:14.112 "base_bdevs_list": [ 00:16:14.112 { 00:16:14.112 "name": "spare", 00:16:14.112 "uuid": "b18fe216-6392-54ef-adf4-972e4f63f6ae", 00:16:14.112 "is_configured": true, 00:16:14.112 "data_offset": 0, 00:16:14.112 "data_size": 65536 00:16:14.112 }, 00:16:14.112 { 00:16:14.112 
"name": "BaseBdev2", 00:16:14.112 "uuid": "ba7ce151-8199-5808-b19a-5695722b3e2b", 00:16:14.112 "is_configured": true, 00:16:14.112 "data_offset": 0, 00:16:14.112 "data_size": 65536 00:16:14.112 }, 00:16:14.112 { 00:16:14.112 "name": "BaseBdev3", 00:16:14.112 "uuid": "eba0ad64-e348-57d1-9760-34ffe92a20fa", 00:16:14.112 "is_configured": true, 00:16:14.112 "data_offset": 0, 00:16:14.112 "data_size": 65536 00:16:14.112 } 00:16:14.112 ] 00:16:14.112 }' 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.112 12:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.681 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:14.681 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.681 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.681 [2024-09-30 12:33:26.284365] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:14.681 [2024-09-30 12:33:26.284395] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.681 [2024-09-30 12:33:26.284460] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.681 [2024-09-30 12:33:26.284528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.681 [2024-09-30 12:33:26.284542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:14.681 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.682 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.682 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:14.682 12:33:26 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.682 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.682 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.682 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:14.682 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:14.682 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:14.682 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:14.682 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:14.682 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:14.682 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:14.682 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:14.682 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:14.682 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:14.682 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:14.682 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:14.682 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:14.682 /dev/nbd0 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:14.942 1+0 records in 00:16:14.942 1+0 records out 00:16:14.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444863 s, 9.2 MB/s 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:14.942 /dev/nbd1 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:14.942 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:14.942 1+0 records in 00:16:14.942 1+0 records out 00:16:14.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339177 s, 12.1 MB/s 00:16:15.202 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.202 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:15.202 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.202 12:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:15.202 12:33:26 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:15.202 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:15.202 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:15.202 12:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:15.202 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:15.202 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:15.202 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:15.202 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:15.202 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:15.202 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:15.202 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:15.461 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:15.461 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:15.461 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:15.461 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:15.461 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:15.461 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:15.461 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:15.461 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:16:15.461 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:15.461 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81422 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 81422 ']' 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 81422 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81422 00:16:15.721 killing process with pid 81422 00:16:15.721 Received shutdown signal, test time was about 60.000000 seconds 00:16:15.721 00:16:15.721 Latency(us) 00:16:15.721 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:15.721 =================================================================================================================== 00:16:15.721 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81422' 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 81422 00:16:15.721 [2024-09-30 12:33:27.469644] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:15.721 12:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 81422 00:16:15.981 [2024-09-30 12:33:27.838401] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:17.360 12:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:17.360 00:16:17.360 real 0m15.169s 00:16:17.360 user 0m18.621s 00:16:17.360 sys 0m1.940s 00:16:17.360 12:33:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:17.361 ************************************ 00:16:17.361 END TEST raid5f_rebuild_test 00:16:17.361 12:33:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.361 ************************************ 00:16:17.361 12:33:29 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:16:17.361 12:33:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:17.361 12:33:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:17.361 12:33:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:17.361 
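The trace in this test repeatedly runs `waitfornbd` from `autotest_common.sh`: it polls `/proc/partitions` until the nbd device name appears, then performs a single direct-I/O `dd` read to confirm the device is actually usable. A minimal sketch of the polling half of that pattern follows; the function name and the parameterized partitions path are illustrative, not the exact SPDK helper:

```shell
#!/bin/sh
# Poll a partitions listing until a block device name shows up,
# mirroring the waitfornbd loop traced above.
# $1 = device name (e.g. nbd0)
# $2 = partitions file (defaults to /proc/partitions)
wait_for_block_dev() {
    name=$1
    partitions=${2:-/proc/partitions}
    i=1
    while [ "$i" -le 20 ]; do
        # -w matches the whole device name so nbd0 does not match nbd10
        if grep -q -w "$name" "$partitions"; then
            return 0
        fi
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}
```

The traced helper additionally reads one 4096-byte block with `dd iflag=direct` and checks the copied size with `stat -c %s`; that part is omitted here since it needs a real block device.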
************************************ 00:16:17.361 START TEST raid5f_rebuild_test_sb 00:16:17.361 ************************************ 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81857 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81857 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81857 ']' 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.361 
12:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:17.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:17.361 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.361 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:17.361 Zero copy mechanism will not be used. 00:16:17.361 [2024-09-30 12:33:29.190419] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:16:17.361 [2024-09-30 12:33:29.190564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81857 ] 00:16:17.621 [2024-09-30 12:33:29.358303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.880 [2024-09-30 12:33:29.545835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.880 [2024-09-30 12:33:29.711864] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.880 [2024-09-30 12:33:29.711905] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.140 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:18.140 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:18.140 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:18.140 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:18.140 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.140 12:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.140 BaseBdev1_malloc 00:16:18.140 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.140 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:18.140 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.140 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.400 [2024-09-30 12:33:30.038189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:18.400 [2024-09-30 12:33:30.038263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.400 [2024-09-30 12:33:30.038286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:18.400 [2024-09-30 12:33:30.038299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.400 [2024-09-30 12:33:30.040196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.400 [2024-09-30 12:33:30.040235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:18.400 BaseBdev1 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.400 
12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.400 BaseBdev2_malloc 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.400 [2024-09-30 12:33:30.122362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:18.400 [2024-09-30 12:33:30.122421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.400 [2024-09-30 12:33:30.122440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:18.400 [2024-09-30 12:33:30.122451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.400 [2024-09-30 12:33:30.124318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.400 [2024-09-30 12:33:30.124355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:18.400 BaseBdev2 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.400 BaseBdev3_malloc 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.400 [2024-09-30 12:33:30.174423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:18.400 [2024-09-30 12:33:30.174474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.400 [2024-09-30 12:33:30.174495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:18.400 [2024-09-30 12:33:30.174505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.400 [2024-09-30 12:33:30.176334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.400 [2024-09-30 12:33:30.176373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:18.400 BaseBdev3 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.400 spare_malloc 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.400 spare_delay 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.400 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.400 [2024-09-30 12:33:30.236827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:18.400 [2024-09-30 12:33:30.236879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.401 [2024-09-30 12:33:30.236895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:18.401 [2024-09-30 12:33:30.236904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.401 [2024-09-30 12:33:30.238803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.401 [2024-09-30 12:33:30.238842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:18.401 spare 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.401 [2024-09-30 12:33:30.248900] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:16:18.401 [2024-09-30 12:33:30.250500] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:18.401 [2024-09-30 12:33:30.250559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:18.401 [2024-09-30 12:33:30.250722] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:18.401 [2024-09-30 12:33:30.250734] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:18.401 [2024-09-30 12:33:30.250966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:18.401 [2024-09-30 12:33:30.255572] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:18.401 [2024-09-30 12:33:30.255598] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:18.401 [2024-09-30 12:33:30.255783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.401 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.661 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.661 "name": "raid_bdev1", 00:16:18.661 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:18.661 "strip_size_kb": 64, 00:16:18.661 "state": "online", 00:16:18.661 "raid_level": "raid5f", 00:16:18.661 "superblock": true, 00:16:18.661 "num_base_bdevs": 3, 00:16:18.661 "num_base_bdevs_discovered": 3, 00:16:18.661 "num_base_bdevs_operational": 3, 00:16:18.661 "base_bdevs_list": [ 00:16:18.661 { 00:16:18.661 "name": "BaseBdev1", 00:16:18.661 "uuid": "dc0a4901-6a2f-5f17-8f15-404945ca0cd3", 00:16:18.661 "is_configured": true, 00:16:18.661 "data_offset": 2048, 00:16:18.661 "data_size": 63488 00:16:18.661 }, 00:16:18.661 { 00:16:18.661 "name": "BaseBdev2", 00:16:18.661 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:18.661 "is_configured": true, 00:16:18.661 "data_offset": 2048, 00:16:18.661 "data_size": 63488 00:16:18.661 }, 00:16:18.661 { 00:16:18.661 "name": "BaseBdev3", 00:16:18.661 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:18.661 "is_configured": true, 00:16:18.661 "data_offset": 2048, 00:16:18.661 "data_size": 63488 00:16:18.661 } 
00:16:18.661 ] 00:16:18.661 }' 00:16:18.661 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.661 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.921 [2024-09-30 12:33:30.673011] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:18.921 
12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:18.921 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:19.180 [2024-09-30 12:33:30.880577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:19.180 /dev/nbd0 00:16:19.180 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:19.180 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:19.180 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:19.180 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:19.180 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:19.181 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:19.181 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:19.181 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:19.181 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:19.181 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:19.181 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:19.181 1+0 records in 00:16:19.181 1+0 records out 00:16:19.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004201 s, 9.8 MB/s 00:16:19.181 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.181 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:19.181 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.181 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:19.181 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:19.181 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:19.181 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:19.181 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:19.181 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:19.181 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:19.181 12:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:16:19.749 496+0 records in 
00:16:19.749 496+0 records out 00:16:19.749 65011712 bytes (65 MB, 62 MiB) copied, 0.555838 s, 117 MB/s 00:16:19.749 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:19.749 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:19.749 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:19.749 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:19.749 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:19.749 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:19.749 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:20.008 [2024-09-30 12:33:31.714840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 
00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.008 [2024-09-30 12:33:31.736881] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.008 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.008 "name": "raid_bdev1", 00:16:20.008 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:20.008 "strip_size_kb": 64, 00:16:20.008 "state": "online", 00:16:20.008 "raid_level": "raid5f", 00:16:20.008 "superblock": true, 00:16:20.008 "num_base_bdevs": 3, 00:16:20.008 "num_base_bdevs_discovered": 2, 00:16:20.008 "num_base_bdevs_operational": 2, 00:16:20.008 "base_bdevs_list": [ 00:16:20.008 { 00:16:20.008 "name": null, 00:16:20.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.008 "is_configured": false, 00:16:20.008 "data_offset": 0, 00:16:20.008 "data_size": 63488 00:16:20.008 }, 00:16:20.008 { 00:16:20.008 "name": "BaseBdev2", 00:16:20.009 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:20.009 "is_configured": true, 00:16:20.009 "data_offset": 2048, 00:16:20.009 "data_size": 63488 00:16:20.009 }, 00:16:20.009 { 00:16:20.009 "name": "BaseBdev3", 00:16:20.009 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:20.009 "is_configured": true, 00:16:20.009 "data_offset": 2048, 00:16:20.009 "data_size": 63488 00:16:20.009 } 00:16:20.009 ] 00:16:20.009 }' 00:16:20.009 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.009 12:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.578 12:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:20.578 12:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.578 12:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.578 [2024-09-30 12:33:32.224060] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:20.578 [2024-09-30 
12:33:32.237034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:16:20.578 12:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.578 12:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:20.578 [2024-09-30 12:33:32.243577] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:21.518 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.518 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.518 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.518 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.518 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.518 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.518 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.518 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.518 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.518 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.518 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.518 "name": "raid_bdev1", 00:16:21.518 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:21.518 "strip_size_kb": 64, 00:16:21.518 "state": "online", 00:16:21.518 "raid_level": "raid5f", 00:16:21.518 "superblock": true, 00:16:21.518 "num_base_bdevs": 3, 00:16:21.518 "num_base_bdevs_discovered": 3, 00:16:21.518 
"num_base_bdevs_operational": 3, 00:16:21.518 "process": { 00:16:21.518 "type": "rebuild", 00:16:21.518 "target": "spare", 00:16:21.518 "progress": { 00:16:21.518 "blocks": 20480, 00:16:21.518 "percent": 16 00:16:21.518 } 00:16:21.518 }, 00:16:21.518 "base_bdevs_list": [ 00:16:21.518 { 00:16:21.518 "name": "spare", 00:16:21.518 "uuid": "63d1dc76-92a9-5733-a754-266d82bcadef", 00:16:21.518 "is_configured": true, 00:16:21.518 "data_offset": 2048, 00:16:21.518 "data_size": 63488 00:16:21.518 }, 00:16:21.518 { 00:16:21.518 "name": "BaseBdev2", 00:16:21.518 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:21.518 "is_configured": true, 00:16:21.518 "data_offset": 2048, 00:16:21.518 "data_size": 63488 00:16:21.518 }, 00:16:21.518 { 00:16:21.518 "name": "BaseBdev3", 00:16:21.518 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:21.518 "is_configured": true, 00:16:21.518 "data_offset": 2048, 00:16:21.518 "data_size": 63488 00:16:21.518 } 00:16:21.518 ] 00:16:21.518 }' 00:16:21.518 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.518 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.518 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.518 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.518 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:21.518 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.518 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.518 [2024-09-30 12:33:33.406948] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:21.778 [2024-09-30 12:33:33.450480] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild 
on raid bdev raid_bdev1: No such device 00:16:21.778 [2024-09-30 12:33:33.450533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.778 [2024-09-30 12:33:33.450549] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:21.778 [2024-09-30 12:33:33.450556] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:21.778 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.778 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:21.778 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.778 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.778 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.778 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.778 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:21.778 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.778 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.778 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.778 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.778 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.778 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.778 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:21.778 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.778 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.778 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.778 "name": "raid_bdev1", 00:16:21.778 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:21.778 "strip_size_kb": 64, 00:16:21.778 "state": "online", 00:16:21.778 "raid_level": "raid5f", 00:16:21.778 "superblock": true, 00:16:21.778 "num_base_bdevs": 3, 00:16:21.778 "num_base_bdevs_discovered": 2, 00:16:21.778 "num_base_bdevs_operational": 2, 00:16:21.778 "base_bdevs_list": [ 00:16:21.778 { 00:16:21.778 "name": null, 00:16:21.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.778 "is_configured": false, 00:16:21.778 "data_offset": 0, 00:16:21.778 "data_size": 63488 00:16:21.778 }, 00:16:21.778 { 00:16:21.778 "name": "BaseBdev2", 00:16:21.778 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:21.778 "is_configured": true, 00:16:21.778 "data_offset": 2048, 00:16:21.778 "data_size": 63488 00:16:21.778 }, 00:16:21.778 { 00:16:21.778 "name": "BaseBdev3", 00:16:21.778 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:21.778 "is_configured": true, 00:16:21.778 "data_offset": 2048, 00:16:21.778 "data_size": 63488 00:16:21.778 } 00:16:21.778 ] 00:16:21.778 }' 00:16:21.778 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.778 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.039 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:22.039 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.039 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:22.039 12:33:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:22.039 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.039 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.039 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.039 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.039 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.039 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.299 12:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.299 "name": "raid_bdev1", 00:16:22.299 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:22.299 "strip_size_kb": 64, 00:16:22.299 "state": "online", 00:16:22.299 "raid_level": "raid5f", 00:16:22.299 "superblock": true, 00:16:22.299 "num_base_bdevs": 3, 00:16:22.299 "num_base_bdevs_discovered": 2, 00:16:22.299 "num_base_bdevs_operational": 2, 00:16:22.299 "base_bdevs_list": [ 00:16:22.299 { 00:16:22.299 "name": null, 00:16:22.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.299 "is_configured": false, 00:16:22.299 "data_offset": 0, 00:16:22.299 "data_size": 63488 00:16:22.299 }, 00:16:22.299 { 00:16:22.299 "name": "BaseBdev2", 00:16:22.299 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:22.299 "is_configured": true, 00:16:22.299 "data_offset": 2048, 00:16:22.299 "data_size": 63488 00:16:22.299 }, 00:16:22.299 { 00:16:22.299 "name": "BaseBdev3", 00:16:22.299 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:22.299 "is_configured": true, 00:16:22.299 "data_offset": 2048, 00:16:22.299 "data_size": 63488 00:16:22.299 } 00:16:22.299 ] 00:16:22.299 }' 00:16:22.299 12:33:33 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.299 12:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:22.299 12:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.299 12:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:22.299 12:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:22.299 12:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.299 12:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.299 [2024-09-30 12:33:34.047950] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:22.299 [2024-09-30 12:33:34.061489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:16:22.299 12:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.299 12:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:22.299 [2024-09-30 12:33:34.068438] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:23.240 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.240 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.240 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.240 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.240 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.240 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:23.240 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.240 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.240 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.240 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.240 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.240 "name": "raid_bdev1", 00:16:23.240 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:23.240 "strip_size_kb": 64, 00:16:23.240 "state": "online", 00:16:23.240 "raid_level": "raid5f", 00:16:23.240 "superblock": true, 00:16:23.240 "num_base_bdevs": 3, 00:16:23.240 "num_base_bdevs_discovered": 3, 00:16:23.240 "num_base_bdevs_operational": 3, 00:16:23.240 "process": { 00:16:23.240 "type": "rebuild", 00:16:23.240 "target": "spare", 00:16:23.240 "progress": { 00:16:23.240 "blocks": 20480, 00:16:23.240 "percent": 16 00:16:23.240 } 00:16:23.240 }, 00:16:23.240 "base_bdevs_list": [ 00:16:23.240 { 00:16:23.240 "name": "spare", 00:16:23.240 "uuid": "63d1dc76-92a9-5733-a754-266d82bcadef", 00:16:23.240 "is_configured": true, 00:16:23.240 "data_offset": 2048, 00:16:23.240 "data_size": 63488 00:16:23.240 }, 00:16:23.240 { 00:16:23.241 "name": "BaseBdev2", 00:16:23.241 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:23.241 "is_configured": true, 00:16:23.241 "data_offset": 2048, 00:16:23.241 "data_size": 63488 00:16:23.241 }, 00:16:23.241 { 00:16:23.241 "name": "BaseBdev3", 00:16:23.241 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:23.241 "is_configured": true, 00:16:23.241 "data_offset": 2048, 00:16:23.241 "data_size": 63488 00:16:23.241 } 00:16:23.241 ] 00:16:23.241 }' 00:16:23.241 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:23.502 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=560 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.502 "name": "raid_bdev1", 00:16:23.502 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:23.502 "strip_size_kb": 64, 00:16:23.502 "state": "online", 00:16:23.502 "raid_level": "raid5f", 00:16:23.502 "superblock": true, 00:16:23.502 "num_base_bdevs": 3, 00:16:23.502 "num_base_bdevs_discovered": 3, 00:16:23.502 "num_base_bdevs_operational": 3, 00:16:23.502 "process": { 00:16:23.502 "type": "rebuild", 00:16:23.502 "target": "spare", 00:16:23.502 "progress": { 00:16:23.502 "blocks": 22528, 00:16:23.502 "percent": 17 00:16:23.502 } 00:16:23.502 }, 00:16:23.502 "base_bdevs_list": [ 00:16:23.502 { 00:16:23.502 "name": "spare", 00:16:23.502 "uuid": "63d1dc76-92a9-5733-a754-266d82bcadef", 00:16:23.502 "is_configured": true, 00:16:23.502 "data_offset": 2048, 00:16:23.502 "data_size": 63488 00:16:23.502 }, 00:16:23.502 { 00:16:23.502 "name": "BaseBdev2", 00:16:23.502 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:23.502 "is_configured": true, 00:16:23.502 "data_offset": 2048, 00:16:23.502 "data_size": 63488 00:16:23.502 }, 00:16:23.502 { 00:16:23.502 "name": "BaseBdev3", 00:16:23.502 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:23.502 "is_configured": true, 00:16:23.502 "data_offset": 2048, 00:16:23.502 "data_size": 63488 00:16:23.502 } 00:16:23.502 ] 00:16:23.502 }' 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.502 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:23.503 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:23.503 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.503 12:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:24.886 12:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:24.886 12:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.886 12:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.886 12:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.886 12:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.886 12:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.886 12:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.886 12:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.886 12:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.886 12:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.886 12:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.886 12:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.886 "name": "raid_bdev1", 00:16:24.886 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:24.886 "strip_size_kb": 64, 00:16:24.886 "state": "online", 00:16:24.886 "raid_level": "raid5f", 00:16:24.886 "superblock": true, 00:16:24.886 "num_base_bdevs": 3, 00:16:24.886 "num_base_bdevs_discovered": 3, 00:16:24.886 "num_base_bdevs_operational": 3, 00:16:24.886 "process": { 00:16:24.886 "type": "rebuild", 
00:16:24.886 "target": "spare", 00:16:24.886 "progress": { 00:16:24.886 "blocks": 45056, 00:16:24.886 "percent": 35 00:16:24.886 } 00:16:24.886 }, 00:16:24.886 "base_bdevs_list": [ 00:16:24.886 { 00:16:24.886 "name": "spare", 00:16:24.886 "uuid": "63d1dc76-92a9-5733-a754-266d82bcadef", 00:16:24.886 "is_configured": true, 00:16:24.886 "data_offset": 2048, 00:16:24.886 "data_size": 63488 00:16:24.886 }, 00:16:24.886 { 00:16:24.886 "name": "BaseBdev2", 00:16:24.886 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:24.886 "is_configured": true, 00:16:24.886 "data_offset": 2048, 00:16:24.886 "data_size": 63488 00:16:24.886 }, 00:16:24.886 { 00:16:24.886 "name": "BaseBdev3", 00:16:24.886 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:24.886 "is_configured": true, 00:16:24.886 "data_offset": 2048, 00:16:24.886 "data_size": 63488 00:16:24.886 } 00:16:24.886 ] 00:16:24.886 }' 00:16:24.886 12:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.886 12:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.886 12:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.886 12:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.886 12:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:25.826 12:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:25.826 12:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.826 12:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.826 12:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.826 12:33:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.826 12:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.826 12:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.826 12:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.826 12:33:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.826 12:33:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.826 12:33:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.826 12:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.826 "name": "raid_bdev1", 00:16:25.827 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:25.827 "strip_size_kb": 64, 00:16:25.827 "state": "online", 00:16:25.827 "raid_level": "raid5f", 00:16:25.827 "superblock": true, 00:16:25.827 "num_base_bdevs": 3, 00:16:25.827 "num_base_bdevs_discovered": 3, 00:16:25.827 "num_base_bdevs_operational": 3, 00:16:25.827 "process": { 00:16:25.827 "type": "rebuild", 00:16:25.827 "target": "spare", 00:16:25.827 "progress": { 00:16:25.827 "blocks": 69632, 00:16:25.827 "percent": 54 00:16:25.827 } 00:16:25.827 }, 00:16:25.827 "base_bdevs_list": [ 00:16:25.827 { 00:16:25.827 "name": "spare", 00:16:25.827 "uuid": "63d1dc76-92a9-5733-a754-266d82bcadef", 00:16:25.827 "is_configured": true, 00:16:25.827 "data_offset": 2048, 00:16:25.827 "data_size": 63488 00:16:25.827 }, 00:16:25.827 { 00:16:25.827 "name": "BaseBdev2", 00:16:25.827 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:25.827 "is_configured": true, 00:16:25.827 "data_offset": 2048, 00:16:25.827 "data_size": 63488 00:16:25.827 }, 00:16:25.827 { 00:16:25.827 "name": "BaseBdev3", 00:16:25.827 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:25.827 
"is_configured": true, 00:16:25.827 "data_offset": 2048, 00:16:25.827 "data_size": 63488 00:16:25.827 } 00:16:25.827 ] 00:16:25.827 }' 00:16:25.827 12:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.827 12:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.827 12:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.827 12:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.827 12:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:27.221 12:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:27.221 12:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.221 12:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.221 12:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.221 12:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.221 12:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.221 12:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.221 12:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.221 12:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.221 12:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.221 12:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.221 12:33:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.221 "name": "raid_bdev1", 00:16:27.221 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:27.221 "strip_size_kb": 64, 00:16:27.221 "state": "online", 00:16:27.221 "raid_level": "raid5f", 00:16:27.221 "superblock": true, 00:16:27.221 "num_base_bdevs": 3, 00:16:27.221 "num_base_bdevs_discovered": 3, 00:16:27.221 "num_base_bdevs_operational": 3, 00:16:27.221 "process": { 00:16:27.221 "type": "rebuild", 00:16:27.221 "target": "spare", 00:16:27.221 "progress": { 00:16:27.221 "blocks": 92160, 00:16:27.221 "percent": 72 00:16:27.221 } 00:16:27.221 }, 00:16:27.221 "base_bdevs_list": [ 00:16:27.221 { 00:16:27.221 "name": "spare", 00:16:27.221 "uuid": "63d1dc76-92a9-5733-a754-266d82bcadef", 00:16:27.221 "is_configured": true, 00:16:27.221 "data_offset": 2048, 00:16:27.221 "data_size": 63488 00:16:27.221 }, 00:16:27.221 { 00:16:27.221 "name": "BaseBdev2", 00:16:27.221 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:27.221 "is_configured": true, 00:16:27.221 "data_offset": 2048, 00:16:27.221 "data_size": 63488 00:16:27.221 }, 00:16:27.221 { 00:16:27.221 "name": "BaseBdev3", 00:16:27.221 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:27.221 "is_configured": true, 00:16:27.221 "data_offset": 2048, 00:16:27.221 "data_size": 63488 00:16:27.221 } 00:16:27.221 ] 00:16:27.221 }' 00:16:27.221 12:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.221 12:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.221 12:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.221 12:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.221 12:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:28.161 12:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:16:28.161 12:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.161 12:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.161 12:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.161 12:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.161 12:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.161 12:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.161 12:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.161 12:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.161 12:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.161 12:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.161 12:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.161 "name": "raid_bdev1", 00:16:28.161 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:28.161 "strip_size_kb": 64, 00:16:28.161 "state": "online", 00:16:28.161 "raid_level": "raid5f", 00:16:28.161 "superblock": true, 00:16:28.161 "num_base_bdevs": 3, 00:16:28.161 "num_base_bdevs_discovered": 3, 00:16:28.161 "num_base_bdevs_operational": 3, 00:16:28.161 "process": { 00:16:28.161 "type": "rebuild", 00:16:28.161 "target": "spare", 00:16:28.161 "progress": { 00:16:28.161 "blocks": 116736, 00:16:28.161 "percent": 91 00:16:28.161 } 00:16:28.161 }, 00:16:28.161 "base_bdevs_list": [ 00:16:28.161 { 00:16:28.161 "name": "spare", 00:16:28.161 "uuid": "63d1dc76-92a9-5733-a754-266d82bcadef", 00:16:28.161 "is_configured": true, 
00:16:28.161 "data_offset": 2048, 00:16:28.161 "data_size": 63488 00:16:28.161 }, 00:16:28.161 { 00:16:28.161 "name": "BaseBdev2", 00:16:28.161 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:28.161 "is_configured": true, 00:16:28.161 "data_offset": 2048, 00:16:28.161 "data_size": 63488 00:16:28.161 }, 00:16:28.161 { 00:16:28.161 "name": "BaseBdev3", 00:16:28.161 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:28.161 "is_configured": true, 00:16:28.161 "data_offset": 2048, 00:16:28.161 "data_size": 63488 00:16:28.161 } 00:16:28.161 ] 00:16:28.161 }' 00:16:28.161 12:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.161 12:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.161 12:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.161 12:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.161 12:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:28.421 [2024-09-30 12:33:40.300499] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:28.421 [2024-09-30 12:33:40.300567] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:28.421 [2024-09-30 12:33:40.300656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.360 12:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.360 12:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.360 12:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.360 12:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.360 
12:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.360 12:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.360 12:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.360 12:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.360 12:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.360 12:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.360 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.360 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.360 "name": "raid_bdev1", 00:16:29.360 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:29.360 "strip_size_kb": 64, 00:16:29.360 "state": "online", 00:16:29.360 "raid_level": "raid5f", 00:16:29.360 "superblock": true, 00:16:29.360 "num_base_bdevs": 3, 00:16:29.360 "num_base_bdevs_discovered": 3, 00:16:29.360 "num_base_bdevs_operational": 3, 00:16:29.360 "base_bdevs_list": [ 00:16:29.360 { 00:16:29.360 "name": "spare", 00:16:29.360 "uuid": "63d1dc76-92a9-5733-a754-266d82bcadef", 00:16:29.360 "is_configured": true, 00:16:29.360 "data_offset": 2048, 00:16:29.360 "data_size": 63488 00:16:29.360 }, 00:16:29.360 { 00:16:29.360 "name": "BaseBdev2", 00:16:29.360 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:29.360 "is_configured": true, 00:16:29.360 "data_offset": 2048, 00:16:29.360 "data_size": 63488 00:16:29.360 }, 00:16:29.360 { 00:16:29.360 "name": "BaseBdev3", 00:16:29.360 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:29.361 "is_configured": true, 00:16:29.361 "data_offset": 2048, 00:16:29.361 "data_size": 63488 00:16:29.361 } 00:16:29.361 ] 00:16:29.361 }' 00:16:29.361 12:33:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.361 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:29.361 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.361 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:29.361 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:29.361 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.361 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.361 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.361 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.361 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.361 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.361 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.361 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.361 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.361 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.361 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.361 "name": "raid_bdev1", 00:16:29.361 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:29.361 "strip_size_kb": 64, 00:16:29.361 "state": "online", 00:16:29.361 "raid_level": "raid5f", 00:16:29.361 "superblock": true, 
00:16:29.361 "num_base_bdevs": 3, 00:16:29.361 "num_base_bdevs_discovered": 3, 00:16:29.361 "num_base_bdevs_operational": 3, 00:16:29.361 "base_bdevs_list": [ 00:16:29.361 { 00:16:29.361 "name": "spare", 00:16:29.361 "uuid": "63d1dc76-92a9-5733-a754-266d82bcadef", 00:16:29.361 "is_configured": true, 00:16:29.361 "data_offset": 2048, 00:16:29.361 "data_size": 63488 00:16:29.361 }, 00:16:29.361 { 00:16:29.361 "name": "BaseBdev2", 00:16:29.361 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:29.361 "is_configured": true, 00:16:29.361 "data_offset": 2048, 00:16:29.361 "data_size": 63488 00:16:29.361 }, 00:16:29.361 { 00:16:29.361 "name": "BaseBdev3", 00:16:29.361 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:29.361 "is_configured": true, 00:16:29.361 "data_offset": 2048, 00:16:29.361 "data_size": 63488 00:16:29.361 } 00:16:29.361 ] 00:16:29.361 }' 00:16:29.361 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.361 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.361 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.621 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.621 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:29.621 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.621 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.621 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.621 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.621 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:29.621 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.621 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.621 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.621 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.621 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.621 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.621 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.621 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.621 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.621 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.621 "name": "raid_bdev1", 00:16:29.621 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:29.621 "strip_size_kb": 64, 00:16:29.621 "state": "online", 00:16:29.621 "raid_level": "raid5f", 00:16:29.621 "superblock": true, 00:16:29.621 "num_base_bdevs": 3, 00:16:29.621 "num_base_bdevs_discovered": 3, 00:16:29.621 "num_base_bdevs_operational": 3, 00:16:29.621 "base_bdevs_list": [ 00:16:29.621 { 00:16:29.621 "name": "spare", 00:16:29.621 "uuid": "63d1dc76-92a9-5733-a754-266d82bcadef", 00:16:29.621 "is_configured": true, 00:16:29.621 "data_offset": 2048, 00:16:29.621 "data_size": 63488 00:16:29.621 }, 00:16:29.621 { 00:16:29.621 "name": "BaseBdev2", 00:16:29.621 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:29.621 "is_configured": true, 00:16:29.621 "data_offset": 2048, 00:16:29.621 "data_size": 63488 00:16:29.621 }, 00:16:29.621 { 00:16:29.621 "name": 
"BaseBdev3", 00:16:29.621 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:29.621 "is_configured": true, 00:16:29.621 "data_offset": 2048, 00:16:29.621 "data_size": 63488 00:16:29.621 } 00:16:29.621 ] 00:16:29.621 }' 00:16:29.621 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.621 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.881 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:29.881 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.881 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.881 [2024-09-30 12:33:41.725028] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.881 [2024-09-30 12:33:41.725059] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:29.881 [2024-09-30 12:33:41.725129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.881 [2024-09-30 12:33:41.725199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.881 [2024-09-30 12:33:41.725213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:29.881 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.881 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.881 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.882 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.882 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:29.882 12:33:41 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.141 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:30.141 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:30.141 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:30.142 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:30.142 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:30.142 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:30.142 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:30.142 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:30.142 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:30.142 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:30.142 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:30.142 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:30.142 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:30.142 /dev/nbd0 00:16:30.142 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:30.142 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:30.142 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:30.142 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@869 -- # local i 00:16:30.142 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:30.142 12:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:30.142 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:30.142 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:30.142 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:30.142 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:30.142 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:30.142 1+0 records in 00:16:30.142 1+0 records out 00:16:30.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312528 s, 13.1 MB/s 00:16:30.142 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.142 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:30.142 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.142 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:30.142 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:30.142 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:30.142 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:30.142 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare 
/dev/nbd1 00:16:30.401 /dev/nbd1 00:16:30.401 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:30.401 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:30.401 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:30.401 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:30.401 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:30.401 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:30.401 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:30.401 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:30.401 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:30.401 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:30.401 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:30.401 1+0 records in 00:16:30.401 1+0 records out 00:16:30.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490847 s, 8.3 MB/s 00:16:30.401 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.401 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:30.401 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.401 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:30.401 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@889 -- # return 0 00:16:30.401 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:30.401 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:30.401 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:30.661 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:30.661 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:30.661 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:30.661 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:30.661 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:30.661 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:30.661 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:30.920 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:30.920 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:30.920 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:30.920 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:30.920 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:30.920 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:30.920 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:30.920 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@45 -- # return 0 00:16:30.920 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:30.920 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:31.180 [2024-09-30 12:33:42.871400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:31.180 [2024-09-30 12:33:42.871480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.180 [2024-09-30 12:33:42.871500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:31.180 [2024-09-30 12:33:42.871510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.180 [2024-09-30 12:33:42.873488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.180 [2024-09-30 12:33:42.873529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:31.180 [2024-09-30 12:33:42.873601] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:31.180 [2024-09-30 12:33:42.873670] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:31.180 [2024-09-30 12:33:42.873827] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.180 [2024-09-30 12:33:42.873925] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.180 spare 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.180 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.180 [2024-09-30 12:33:42.973810] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:31.180 [2024-09-30 12:33:42.973839] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:31.180 [2024-09-30 
12:33:42.974071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:31.181 [2024-09-30 12:33:42.979016] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:31.181 [2024-09-30 12:33:42.979038] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:31.181 [2024-09-30 12:33:42.979203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.181 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.181 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:31.181 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.181 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.181 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.181 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.181 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.181 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.181 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.181 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.181 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.181 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.181 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.181 12:33:42 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.181 12:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.181 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.181 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.181 "name": "raid_bdev1", 00:16:31.181 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:31.181 "strip_size_kb": 64, 00:16:31.181 "state": "online", 00:16:31.181 "raid_level": "raid5f", 00:16:31.181 "superblock": true, 00:16:31.181 "num_base_bdevs": 3, 00:16:31.181 "num_base_bdevs_discovered": 3, 00:16:31.181 "num_base_bdevs_operational": 3, 00:16:31.181 "base_bdevs_list": [ 00:16:31.181 { 00:16:31.181 "name": "spare", 00:16:31.181 "uuid": "63d1dc76-92a9-5733-a754-266d82bcadef", 00:16:31.181 "is_configured": true, 00:16:31.181 "data_offset": 2048, 00:16:31.181 "data_size": 63488 00:16:31.181 }, 00:16:31.181 { 00:16:31.181 "name": "BaseBdev2", 00:16:31.181 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:31.181 "is_configured": true, 00:16:31.181 "data_offset": 2048, 00:16:31.181 "data_size": 63488 00:16:31.181 }, 00:16:31.181 { 00:16:31.181 "name": "BaseBdev3", 00:16:31.181 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:31.181 "is_configured": true, 00:16:31.181 "data_offset": 2048, 00:16:31.181 "data_size": 63488 00:16:31.181 } 00:16:31.181 ] 00:16:31.181 }' 00:16:31.181 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.181 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.775 "name": "raid_bdev1", 00:16:31.775 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:31.775 "strip_size_kb": 64, 00:16:31.775 "state": "online", 00:16:31.775 "raid_level": "raid5f", 00:16:31.775 "superblock": true, 00:16:31.775 "num_base_bdevs": 3, 00:16:31.775 "num_base_bdevs_discovered": 3, 00:16:31.775 "num_base_bdevs_operational": 3, 00:16:31.775 "base_bdevs_list": [ 00:16:31.775 { 00:16:31.775 "name": "spare", 00:16:31.775 "uuid": "63d1dc76-92a9-5733-a754-266d82bcadef", 00:16:31.775 "is_configured": true, 00:16:31.775 "data_offset": 2048, 00:16:31.775 "data_size": 63488 00:16:31.775 }, 00:16:31.775 { 00:16:31.775 "name": "BaseBdev2", 00:16:31.775 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:31.775 "is_configured": true, 00:16:31.775 "data_offset": 2048, 00:16:31.775 "data_size": 63488 00:16:31.775 }, 00:16:31.775 { 00:16:31.775 "name": "BaseBdev3", 00:16:31.775 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:31.775 "is_configured": true, 00:16:31.775 "data_offset": 2048, 00:16:31.775 "data_size": 63488 00:16:31.775 } 
00:16:31.775 ] 00:16:31.775 }' 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.775 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.035 [2024-09-30 12:33:43.680089] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.035 "name": "raid_bdev1", 00:16:32.035 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:32.035 "strip_size_kb": 64, 00:16:32.035 "state": "online", 00:16:32.035 "raid_level": "raid5f", 00:16:32.035 "superblock": true, 00:16:32.035 "num_base_bdevs": 3, 00:16:32.035 "num_base_bdevs_discovered": 2, 00:16:32.035 "num_base_bdevs_operational": 2, 00:16:32.035 "base_bdevs_list": [ 00:16:32.035 { 00:16:32.035 "name": null, 00:16:32.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.035 "is_configured": 
false, 00:16:32.035 "data_offset": 0, 00:16:32.035 "data_size": 63488 00:16:32.035 }, 00:16:32.035 { 00:16:32.035 "name": "BaseBdev2", 00:16:32.035 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:32.035 "is_configured": true, 00:16:32.035 "data_offset": 2048, 00:16:32.035 "data_size": 63488 00:16:32.035 }, 00:16:32.035 { 00:16:32.035 "name": "BaseBdev3", 00:16:32.035 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:32.035 "is_configured": true, 00:16:32.035 "data_offset": 2048, 00:16:32.035 "data_size": 63488 00:16:32.035 } 00:16:32.035 ] 00:16:32.035 }' 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.035 12:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.295 12:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:32.295 12:33:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.295 12:33:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.295 [2024-09-30 12:33:44.131527] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.295 [2024-09-30 12:33:44.131644] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:32.295 [2024-09-30 12:33:44.131660] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:32.295 [2024-09-30 12:33:44.131699] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.295 [2024-09-30 12:33:44.145031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:32.295 12:33:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.295 12:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:32.295 [2024-09-30 12:33:44.151776] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.679 "name": "raid_bdev1", 00:16:33.679 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:33.679 "strip_size_kb": 64, 00:16:33.679 "state": "online", 00:16:33.679 
"raid_level": "raid5f", 00:16:33.679 "superblock": true, 00:16:33.679 "num_base_bdevs": 3, 00:16:33.679 "num_base_bdevs_discovered": 3, 00:16:33.679 "num_base_bdevs_operational": 3, 00:16:33.679 "process": { 00:16:33.679 "type": "rebuild", 00:16:33.679 "target": "spare", 00:16:33.679 "progress": { 00:16:33.679 "blocks": 20480, 00:16:33.679 "percent": 16 00:16:33.679 } 00:16:33.679 }, 00:16:33.679 "base_bdevs_list": [ 00:16:33.679 { 00:16:33.679 "name": "spare", 00:16:33.679 "uuid": "63d1dc76-92a9-5733-a754-266d82bcadef", 00:16:33.679 "is_configured": true, 00:16:33.679 "data_offset": 2048, 00:16:33.679 "data_size": 63488 00:16:33.679 }, 00:16:33.679 { 00:16:33.679 "name": "BaseBdev2", 00:16:33.679 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:33.679 "is_configured": true, 00:16:33.679 "data_offset": 2048, 00:16:33.679 "data_size": 63488 00:16:33.679 }, 00:16:33.679 { 00:16:33.679 "name": "BaseBdev3", 00:16:33.679 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:33.679 "is_configured": true, 00:16:33.679 "data_offset": 2048, 00:16:33.679 "data_size": 63488 00:16:33.679 } 00:16:33.679 ] 00:16:33.679 }' 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.679 [2024-09-30 12:33:45.286692] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.679 [2024-09-30 12:33:45.358759] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:33.679 [2024-09-30 12:33:45.358811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.679 [2024-09-30 12:33:45.358824] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.679 [2024-09-30 12:33:45.358833] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.679 "name": "raid_bdev1", 00:16:33.679 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:33.679 "strip_size_kb": 64, 00:16:33.679 "state": "online", 00:16:33.679 "raid_level": "raid5f", 00:16:33.679 "superblock": true, 00:16:33.679 "num_base_bdevs": 3, 00:16:33.679 "num_base_bdevs_discovered": 2, 00:16:33.679 "num_base_bdevs_operational": 2, 00:16:33.679 "base_bdevs_list": [ 00:16:33.679 { 00:16:33.679 "name": null, 00:16:33.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.679 "is_configured": false, 00:16:33.679 "data_offset": 0, 00:16:33.679 "data_size": 63488 00:16:33.679 }, 00:16:33.679 { 00:16:33.679 "name": "BaseBdev2", 00:16:33.679 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:33.679 "is_configured": true, 00:16:33.679 "data_offset": 2048, 00:16:33.679 "data_size": 63488 00:16:33.679 }, 00:16:33.679 { 00:16:33.679 "name": "BaseBdev3", 00:16:33.679 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:33.679 "is_configured": true, 00:16:33.679 "data_offset": 2048, 00:16:33.679 "data_size": 63488 00:16:33.679 } 00:16:33.679 ] 00:16:33.679 }' 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.679 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.249 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:34.249 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.249 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.249 [2024-09-30 12:33:45.857012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:34.249 [2024-09-30 12:33:45.857067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.249 [2024-09-30 12:33:45.857086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:34.249 [2024-09-30 12:33:45.857099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.249 [2024-09-30 12:33:45.857522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.249 [2024-09-30 12:33:45.857552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:34.249 [2024-09-30 12:33:45.857629] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:34.249 [2024-09-30 12:33:45.857650] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:34.249 [2024-09-30 12:33:45.857659] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:34.249 [2024-09-30 12:33:45.857679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:34.249 [2024-09-30 12:33:45.871150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:34.249 spare 00:16:34.249 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.249 12:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:34.249 [2024-09-30 12:33:45.878029] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:35.189 12:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.189 12:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.189 12:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.189 12:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.189 12:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.189 12:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.189 12:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.189 12:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.189 12:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.189 12:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.189 12:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.189 "name": "raid_bdev1", 00:16:35.189 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:35.189 "strip_size_kb": 64, 00:16:35.189 "state": 
"online", 00:16:35.189 "raid_level": "raid5f", 00:16:35.189 "superblock": true, 00:16:35.189 "num_base_bdevs": 3, 00:16:35.189 "num_base_bdevs_discovered": 3, 00:16:35.189 "num_base_bdevs_operational": 3, 00:16:35.189 "process": { 00:16:35.189 "type": "rebuild", 00:16:35.189 "target": "spare", 00:16:35.189 "progress": { 00:16:35.189 "blocks": 20480, 00:16:35.189 "percent": 16 00:16:35.189 } 00:16:35.189 }, 00:16:35.189 "base_bdevs_list": [ 00:16:35.189 { 00:16:35.189 "name": "spare", 00:16:35.189 "uuid": "63d1dc76-92a9-5733-a754-266d82bcadef", 00:16:35.189 "is_configured": true, 00:16:35.189 "data_offset": 2048, 00:16:35.189 "data_size": 63488 00:16:35.189 }, 00:16:35.189 { 00:16:35.189 "name": "BaseBdev2", 00:16:35.189 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:35.189 "is_configured": true, 00:16:35.189 "data_offset": 2048, 00:16:35.189 "data_size": 63488 00:16:35.189 }, 00:16:35.189 { 00:16:35.189 "name": "BaseBdev3", 00:16:35.189 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:35.189 "is_configured": true, 00:16:35.189 "data_offset": 2048, 00:16:35.189 "data_size": 63488 00:16:35.190 } 00:16:35.190 ] 00:16:35.190 }' 00:16:35.190 12:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.190 12:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.190 12:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.190 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.190 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:35.190 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.190 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.190 [2024-09-30 12:33:47.020881] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:35.450 [2024-09-30 12:33:47.084876] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:35.450 [2024-09-30 12:33:47.084923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.450 [2024-09-30 12:33:47.084939] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:35.450 [2024-09-30 12:33:47.084945] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:35.450 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.450 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:35.450 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.450 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.450 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.450 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.450 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.450 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.450 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.450 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.450 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.450 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.450 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.450 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.450 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.450 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.450 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.450 "name": "raid_bdev1", 00:16:35.450 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:35.450 "strip_size_kb": 64, 00:16:35.450 "state": "online", 00:16:35.450 "raid_level": "raid5f", 00:16:35.450 "superblock": true, 00:16:35.450 "num_base_bdevs": 3, 00:16:35.450 "num_base_bdevs_discovered": 2, 00:16:35.450 "num_base_bdevs_operational": 2, 00:16:35.450 "base_bdevs_list": [ 00:16:35.450 { 00:16:35.450 "name": null, 00:16:35.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.450 "is_configured": false, 00:16:35.450 "data_offset": 0, 00:16:35.450 "data_size": 63488 00:16:35.450 }, 00:16:35.450 { 00:16:35.450 "name": "BaseBdev2", 00:16:35.450 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:35.450 "is_configured": true, 00:16:35.450 "data_offset": 2048, 00:16:35.450 "data_size": 63488 00:16:35.450 }, 00:16:35.450 { 00:16:35.450 "name": "BaseBdev3", 00:16:35.450 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:35.450 "is_configured": true, 00:16:35.450 "data_offset": 2048, 00:16:35.450 "data_size": 63488 00:16:35.450 } 00:16:35.450 ] 00:16:35.450 }' 00:16:35.450 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.450 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.710 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:35.710 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:35.710 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:35.710 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:35.710 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.710 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.710 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.710 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.710 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.710 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.710 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.710 "name": "raid_bdev1", 00:16:35.710 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:35.710 "strip_size_kb": 64, 00:16:35.710 "state": "online", 00:16:35.710 "raid_level": "raid5f", 00:16:35.710 "superblock": true, 00:16:35.710 "num_base_bdevs": 3, 00:16:35.710 "num_base_bdevs_discovered": 2, 00:16:35.710 "num_base_bdevs_operational": 2, 00:16:35.710 "base_bdevs_list": [ 00:16:35.710 { 00:16:35.710 "name": null, 00:16:35.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.710 "is_configured": false, 00:16:35.710 "data_offset": 0, 00:16:35.710 "data_size": 63488 00:16:35.710 }, 00:16:35.710 { 00:16:35.710 "name": "BaseBdev2", 00:16:35.710 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:35.710 "is_configured": true, 00:16:35.710 "data_offset": 2048, 00:16:35.710 "data_size": 63488 00:16:35.710 }, 00:16:35.710 { 00:16:35.710 "name": "BaseBdev3", 00:16:35.710 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:35.710 "is_configured": true, 
00:16:35.710 "data_offset": 2048, 00:16:35.710 "data_size": 63488 00:16:35.710 } 00:16:35.710 ] 00:16:35.710 }' 00:16:35.710 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.971 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:35.971 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.971 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:35.971 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:35.971 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.971 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.971 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.971 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:35.971 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.971 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.971 [2024-09-30 12:33:47.698511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:35.971 [2024-09-30 12:33:47.698559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.971 [2024-09-30 12:33:47.698581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:35.971 [2024-09-30 12:33:47.698590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.971 [2024-09-30 12:33:47.698991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.971 [2024-09-30 
12:33:47.699016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:35.971 [2024-09-30 12:33:47.699083] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:35.971 [2024-09-30 12:33:47.699096] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:35.971 [2024-09-30 12:33:47.699107] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:35.971 [2024-09-30 12:33:47.699119] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:35.971 BaseBdev1 00:16:35.971 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.971 12:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:36.911 12:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:36.911 12:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.911 12:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.911 12:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.911 12:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.911 12:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:36.911 12:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.911 12:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.911 12:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.911 12:33:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.911 12:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.911 12:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.911 12:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.911 12:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.911 12:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.911 12:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.911 "name": "raid_bdev1", 00:16:36.911 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:36.911 "strip_size_kb": 64, 00:16:36.911 "state": "online", 00:16:36.911 "raid_level": "raid5f", 00:16:36.911 "superblock": true, 00:16:36.911 "num_base_bdevs": 3, 00:16:36.911 "num_base_bdevs_discovered": 2, 00:16:36.911 "num_base_bdevs_operational": 2, 00:16:36.911 "base_bdevs_list": [ 00:16:36.911 { 00:16:36.911 "name": null, 00:16:36.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.911 "is_configured": false, 00:16:36.911 "data_offset": 0, 00:16:36.911 "data_size": 63488 00:16:36.911 }, 00:16:36.911 { 00:16:36.911 "name": "BaseBdev2", 00:16:36.911 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:36.911 "is_configured": true, 00:16:36.911 "data_offset": 2048, 00:16:36.911 "data_size": 63488 00:16:36.911 }, 00:16:36.911 { 00:16:36.911 "name": "BaseBdev3", 00:16:36.911 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:36.911 "is_configured": true, 00:16:36.911 "data_offset": 2048, 00:16:36.911 "data_size": 63488 00:16:36.911 } 00:16:36.911 ] 00:16:36.911 }' 00:16:36.911 12:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.911 12:33:48 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.481 "name": "raid_bdev1", 00:16:37.481 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:37.481 "strip_size_kb": 64, 00:16:37.481 "state": "online", 00:16:37.481 "raid_level": "raid5f", 00:16:37.481 "superblock": true, 00:16:37.481 "num_base_bdevs": 3, 00:16:37.481 "num_base_bdevs_discovered": 2, 00:16:37.481 "num_base_bdevs_operational": 2, 00:16:37.481 "base_bdevs_list": [ 00:16:37.481 { 00:16:37.481 "name": null, 00:16:37.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.481 "is_configured": false, 00:16:37.481 "data_offset": 0, 00:16:37.481 "data_size": 63488 00:16:37.481 }, 00:16:37.481 { 00:16:37.481 "name": "BaseBdev2", 00:16:37.481 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 
00:16:37.481 "is_configured": true, 00:16:37.481 "data_offset": 2048, 00:16:37.481 "data_size": 63488 00:16:37.481 }, 00:16:37.481 { 00:16:37.481 "name": "BaseBdev3", 00:16:37.481 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:37.481 "is_configured": true, 00:16:37.481 "data_offset": 2048, 00:16:37.481 "data_size": 63488 00:16:37.481 } 00:16:37.481 ] 00:16:37.481 }' 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:37.481 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.481 12:33:49 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.481 [2024-09-30 12:33:49.351827] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.481 [2024-09-30 12:33:49.351943] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:37.481 [2024-09-30 12:33:49.351958] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:37.481 request: 00:16:37.481 { 00:16:37.482 "base_bdev": "BaseBdev1", 00:16:37.482 "raid_bdev": "raid_bdev1", 00:16:37.482 "method": "bdev_raid_add_base_bdev", 00:16:37.482 "req_id": 1 00:16:37.482 } 00:16:37.482 Got JSON-RPC error response 00:16:37.482 response: 00:16:37.482 { 00:16:37.482 "code": -22, 00:16:37.482 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:37.482 } 00:16:37.482 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:37.482 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:16:37.482 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:37.482 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:37.482 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:37.482 12:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:38.879 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:38.879 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.879 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.879 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.879 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.879 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:38.879 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.879 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.879 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.879 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.879 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.879 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.879 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.879 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.879 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.879 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.879 "name": "raid_bdev1", 00:16:38.879 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:38.879 "strip_size_kb": 64, 00:16:38.879 "state": "online", 00:16:38.879 "raid_level": "raid5f", 00:16:38.879 "superblock": true, 00:16:38.879 "num_base_bdevs": 3, 00:16:38.879 "num_base_bdevs_discovered": 2, 00:16:38.879 "num_base_bdevs_operational": 2, 00:16:38.879 "base_bdevs_list": [ 00:16:38.879 { 00:16:38.879 "name": null, 00:16:38.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.879 "is_configured": false, 00:16:38.879 "data_offset": 0, 00:16:38.879 "data_size": 63488 00:16:38.879 }, 00:16:38.879 { 00:16:38.879 
"name": "BaseBdev2", 00:16:38.879 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:38.879 "is_configured": true, 00:16:38.879 "data_offset": 2048, 00:16:38.879 "data_size": 63488 00:16:38.879 }, 00:16:38.879 { 00:16:38.879 "name": "BaseBdev3", 00:16:38.879 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:38.879 "is_configured": true, 00:16:38.879 "data_offset": 2048, 00:16:38.879 "data_size": 63488 00:16:38.879 } 00:16:38.879 ] 00:16:38.879 }' 00:16:38.879 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.879 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.166 "name": "raid_bdev1", 00:16:39.166 "uuid": "6d12a381-39da-4e36-b7f9-711fd54ff065", 00:16:39.166 
"strip_size_kb": 64, 00:16:39.166 "state": "online", 00:16:39.166 "raid_level": "raid5f", 00:16:39.166 "superblock": true, 00:16:39.166 "num_base_bdevs": 3, 00:16:39.166 "num_base_bdevs_discovered": 2, 00:16:39.166 "num_base_bdevs_operational": 2, 00:16:39.166 "base_bdevs_list": [ 00:16:39.166 { 00:16:39.166 "name": null, 00:16:39.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.166 "is_configured": false, 00:16:39.166 "data_offset": 0, 00:16:39.166 "data_size": 63488 00:16:39.166 }, 00:16:39.166 { 00:16:39.166 "name": "BaseBdev2", 00:16:39.166 "uuid": "6d1079e1-f254-5d92-8269-0f0db86a0bb9", 00:16:39.166 "is_configured": true, 00:16:39.166 "data_offset": 2048, 00:16:39.166 "data_size": 63488 00:16:39.166 }, 00:16:39.166 { 00:16:39.166 "name": "BaseBdev3", 00:16:39.166 "uuid": "f114e00e-f3b3-569c-a650-2aff9dca5308", 00:16:39.166 "is_configured": true, 00:16:39.166 "data_offset": 2048, 00:16:39.166 "data_size": 63488 00:16:39.166 } 00:16:39.166 ] 00:16:39.166 }' 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81857 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81857 ']' 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 81857 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:39.166 12:33:50 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81857 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:39.166 killing process with pid 81857 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81857' 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 81857 00:16:39.166 Received shutdown signal, test time was about 60.000000 seconds 00:16:39.166 00:16:39.166 Latency(us) 00:16:39.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.166 =================================================================================================================== 00:16:39.166 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:39.166 [2024-09-30 12:33:50.980091] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:39.166 [2024-09-30 12:33:50.980185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.166 [2024-09-30 12:33:50.980235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.166 [2024-09-30 12:33:50.980246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:39.166 12:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 81857 00:16:39.736 [2024-09-30 12:33:51.347759] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:40.675 12:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:40.675 00:16:40.675 real 0m23.433s 00:16:40.675 user 0m29.883s 00:16:40.675 sys 0m3.076s 00:16:40.675 12:33:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:16:40.675 12:33:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.675 ************************************ 00:16:40.675 END TEST raid5f_rebuild_test_sb 00:16:40.675 ************************************ 00:16:40.936 12:33:52 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:40.936 12:33:52 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:40.936 12:33:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:40.936 12:33:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:40.936 12:33:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:40.936 ************************************ 00:16:40.936 START TEST raid5f_state_function_test 00:16:40.936 ************************************ 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82611 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82611' 00:16:40.936 Process raid pid: 82611 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82611 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82611 ']' 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:40.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:40.936 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.936 [2024-09-30 12:33:52.699405] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:40.936 [2024-09-30 12:33:52.700045] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.196 [2024-09-30 12:33:52.864016] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.196 [2024-09-30 12:33:53.055799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.455 [2024-09-30 12:33:53.251350] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.455 [2024-09-30 12:33:53.251382] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.715 [2024-09-30 12:33:53.511623] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:41.715 [2024-09-30 12:33:53.511677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:41.715 [2024-09-30 12:33:53.511687] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:41.715 [2024-09-30 12:33:53.511696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:41.715 [2024-09-30 12:33:53.511703] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:41.715 [2024-09-30 12:33:53.511711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:41.715 [2024-09-30 12:33:53.511717] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:41.715 [2024-09-30 12:33:53.511727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.715 12:33:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.715 "name": "Existed_Raid", 00:16:41.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.715 "strip_size_kb": 64, 00:16:41.715 "state": "configuring", 00:16:41.715 "raid_level": "raid5f", 00:16:41.715 "superblock": false, 00:16:41.715 "num_base_bdevs": 4, 00:16:41.715 "num_base_bdevs_discovered": 0, 00:16:41.715 "num_base_bdevs_operational": 4, 00:16:41.715 "base_bdevs_list": [ 00:16:41.715 { 00:16:41.715 "name": "BaseBdev1", 00:16:41.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.715 "is_configured": false, 00:16:41.715 "data_offset": 0, 00:16:41.715 "data_size": 0 00:16:41.715 }, 00:16:41.715 { 00:16:41.715 "name": "BaseBdev2", 00:16:41.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.715 "is_configured": false, 00:16:41.715 "data_offset": 0, 00:16:41.715 "data_size": 0 00:16:41.715 }, 00:16:41.715 { 00:16:41.715 "name": "BaseBdev3", 00:16:41.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.715 "is_configured": false, 00:16:41.715 "data_offset": 0, 00:16:41.715 "data_size": 0 00:16:41.715 }, 00:16:41.715 { 00:16:41.715 "name": "BaseBdev4", 00:16:41.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.715 "is_configured": false, 00:16:41.715 "data_offset": 0, 00:16:41.715 "data_size": 0 00:16:41.715 } 00:16:41.715 ] 00:16:41.715 }' 00:16:41.715 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.716 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.285 12:33:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:42.286 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.286 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.286 [2024-09-30 12:33:53.934861] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.286 [2024-09-30 12:33:53.934898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:42.286 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.286 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:42.286 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.286 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.286 [2024-09-30 12:33:53.942893] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:42.286 [2024-09-30 12:33:53.942929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:42.286 [2024-09-30 12:33:53.942936] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.286 [2024-09-30 12:33:53.942945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.286 [2024-09-30 12:33:53.942952] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:42.286 [2024-09-30 12:33:53.942960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:42.286 [2024-09-30 12:33:53.942966] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:42.286 [2024-09-30 12:33:53.942974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:42.286 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.286 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:42.286 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.286 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.286 [2024-09-30 12:33:54.013574] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.286 BaseBdev1 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.286 
12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.286 [ 00:16:42.286 { 00:16:42.286 "name": "BaseBdev1", 00:16:42.286 "aliases": [ 00:16:42.286 "76bbbac0-fcd8-42bb-9145-b0472ae95605" 00:16:42.286 ], 00:16:42.286 "product_name": "Malloc disk", 00:16:42.286 "block_size": 512, 00:16:42.286 "num_blocks": 65536, 00:16:42.286 "uuid": "76bbbac0-fcd8-42bb-9145-b0472ae95605", 00:16:42.286 "assigned_rate_limits": { 00:16:42.286 "rw_ios_per_sec": 0, 00:16:42.286 "rw_mbytes_per_sec": 0, 00:16:42.286 "r_mbytes_per_sec": 0, 00:16:42.286 "w_mbytes_per_sec": 0 00:16:42.286 }, 00:16:42.286 "claimed": true, 00:16:42.286 "claim_type": "exclusive_write", 00:16:42.286 "zoned": false, 00:16:42.286 "supported_io_types": { 00:16:42.286 "read": true, 00:16:42.286 "write": true, 00:16:42.286 "unmap": true, 00:16:42.286 "flush": true, 00:16:42.286 "reset": true, 00:16:42.286 "nvme_admin": false, 00:16:42.286 "nvme_io": false, 00:16:42.286 "nvme_io_md": false, 00:16:42.286 "write_zeroes": true, 00:16:42.286 "zcopy": true, 00:16:42.286 "get_zone_info": false, 00:16:42.286 "zone_management": false, 00:16:42.286 "zone_append": false, 00:16:42.286 "compare": false, 00:16:42.286 "compare_and_write": false, 00:16:42.286 "abort": true, 00:16:42.286 "seek_hole": false, 00:16:42.286 "seek_data": false, 00:16:42.286 "copy": true, 00:16:42.286 "nvme_iov_md": false 00:16:42.286 }, 00:16:42.286 "memory_domains": [ 00:16:42.286 { 00:16:42.286 "dma_device_id": "system", 00:16:42.286 "dma_device_type": 1 00:16:42.286 }, 00:16:42.286 { 00:16:42.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.286 "dma_device_type": 2 00:16:42.286 } 00:16:42.286 ], 00:16:42.286 "driver_specific": {} 00:16:42.286 } 
00:16:42.286 ] 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:42.286 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.286 "name": "Existed_Raid", 00:16:42.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.286 "strip_size_kb": 64, 00:16:42.286 "state": "configuring", 00:16:42.286 "raid_level": "raid5f", 00:16:42.286 "superblock": false, 00:16:42.286 "num_base_bdevs": 4, 00:16:42.286 "num_base_bdevs_discovered": 1, 00:16:42.286 "num_base_bdevs_operational": 4, 00:16:42.286 "base_bdevs_list": [ 00:16:42.286 { 00:16:42.286 "name": "BaseBdev1", 00:16:42.286 "uuid": "76bbbac0-fcd8-42bb-9145-b0472ae95605", 00:16:42.286 "is_configured": true, 00:16:42.286 "data_offset": 0, 00:16:42.286 "data_size": 65536 00:16:42.286 }, 00:16:42.286 { 00:16:42.286 "name": "BaseBdev2", 00:16:42.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.286 "is_configured": false, 00:16:42.286 "data_offset": 0, 00:16:42.286 "data_size": 0 00:16:42.286 }, 00:16:42.286 { 00:16:42.286 "name": "BaseBdev3", 00:16:42.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.286 "is_configured": false, 00:16:42.286 "data_offset": 0, 00:16:42.286 "data_size": 0 00:16:42.287 }, 00:16:42.287 { 00:16:42.287 "name": "BaseBdev4", 00:16:42.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.287 "is_configured": false, 00:16:42.287 "data_offset": 0, 00:16:42.287 "data_size": 0 00:16:42.287 } 00:16:42.287 ] 00:16:42.287 }' 00:16:42.287 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.287 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.856 
[2024-09-30 12:33:54.544659] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.856 [2024-09-30 12:33:54.544698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.856 [2024-09-30 12:33:54.556677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.856 [2024-09-30 12:33:54.558335] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.856 [2024-09-30 12:33:54.558374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.856 [2024-09-30 12:33:54.558382] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:42.856 [2024-09-30 12:33:54.558392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:42.856 [2024-09-30 12:33:54.558398] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:42.856 [2024-09-30 12:33:54.558406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.856 "name": "Existed_Raid", 00:16:42.856 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:42.856 "strip_size_kb": 64, 00:16:42.856 "state": "configuring", 00:16:42.856 "raid_level": "raid5f", 00:16:42.856 "superblock": false, 00:16:42.856 "num_base_bdevs": 4, 00:16:42.856 "num_base_bdevs_discovered": 1, 00:16:42.856 "num_base_bdevs_operational": 4, 00:16:42.856 "base_bdevs_list": [ 00:16:42.856 { 00:16:42.856 "name": "BaseBdev1", 00:16:42.856 "uuid": "76bbbac0-fcd8-42bb-9145-b0472ae95605", 00:16:42.856 "is_configured": true, 00:16:42.856 "data_offset": 0, 00:16:42.856 "data_size": 65536 00:16:42.856 }, 00:16:42.856 { 00:16:42.856 "name": "BaseBdev2", 00:16:42.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.856 "is_configured": false, 00:16:42.856 "data_offset": 0, 00:16:42.856 "data_size": 0 00:16:42.856 }, 00:16:42.856 { 00:16:42.856 "name": "BaseBdev3", 00:16:42.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.856 "is_configured": false, 00:16:42.856 "data_offset": 0, 00:16:42.856 "data_size": 0 00:16:42.856 }, 00:16:42.856 { 00:16:42.856 "name": "BaseBdev4", 00:16:42.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.856 "is_configured": false, 00:16:42.856 "data_offset": 0, 00:16:42.856 "data_size": 0 00:16:42.856 } 00:16:42.856 ] 00:16:42.856 }' 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.856 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.116 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:43.116 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.116 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.116 [2024-09-30 12:33:55.003361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.116 BaseBdev2 00:16:43.116 12:33:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.116 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:43.116 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:43.116 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:43.116 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:43.116 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:43.117 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:43.117 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:43.117 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.117 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.377 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.377 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:43.377 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.377 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.377 [ 00:16:43.377 { 00:16:43.377 "name": "BaseBdev2", 00:16:43.377 "aliases": [ 00:16:43.377 "4957606f-163f-4c14-88fa-b0da6321db5c" 00:16:43.377 ], 00:16:43.377 "product_name": "Malloc disk", 00:16:43.377 "block_size": 512, 00:16:43.377 "num_blocks": 65536, 00:16:43.377 "uuid": "4957606f-163f-4c14-88fa-b0da6321db5c", 00:16:43.377 "assigned_rate_limits": { 00:16:43.377 "rw_ios_per_sec": 0, 00:16:43.377 "rw_mbytes_per_sec": 0, 00:16:43.377 
"r_mbytes_per_sec": 0, 00:16:43.377 "w_mbytes_per_sec": 0 00:16:43.377 }, 00:16:43.377 "claimed": true, 00:16:43.377 "claim_type": "exclusive_write", 00:16:43.377 "zoned": false, 00:16:43.377 "supported_io_types": { 00:16:43.377 "read": true, 00:16:43.377 "write": true, 00:16:43.377 "unmap": true, 00:16:43.377 "flush": true, 00:16:43.377 "reset": true, 00:16:43.377 "nvme_admin": false, 00:16:43.377 "nvme_io": false, 00:16:43.377 "nvme_io_md": false, 00:16:43.377 "write_zeroes": true, 00:16:43.377 "zcopy": true, 00:16:43.377 "get_zone_info": false, 00:16:43.377 "zone_management": false, 00:16:43.377 "zone_append": false, 00:16:43.377 "compare": false, 00:16:43.377 "compare_and_write": false, 00:16:43.377 "abort": true, 00:16:43.377 "seek_hole": false, 00:16:43.377 "seek_data": false, 00:16:43.377 "copy": true, 00:16:43.377 "nvme_iov_md": false 00:16:43.377 }, 00:16:43.377 "memory_domains": [ 00:16:43.377 { 00:16:43.377 "dma_device_id": "system", 00:16:43.377 "dma_device_type": 1 00:16:43.377 }, 00:16:43.377 { 00:16:43.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.377 "dma_device_type": 2 00:16:43.377 } 00:16:43.377 ], 00:16:43.377 "driver_specific": {} 00:16:43.377 } 00:16:43.377 ] 00:16:43.377 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.377 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:43.377 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:43.377 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:43.377 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:43.377 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.377 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:43.377 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.377 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.377 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.377 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.377 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.377 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.377 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.377 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.378 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.378 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.378 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.378 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.378 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.378 "name": "Existed_Raid", 00:16:43.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.378 "strip_size_kb": 64, 00:16:43.378 "state": "configuring", 00:16:43.378 "raid_level": "raid5f", 00:16:43.378 "superblock": false, 00:16:43.378 "num_base_bdevs": 4, 00:16:43.378 "num_base_bdevs_discovered": 2, 00:16:43.378 "num_base_bdevs_operational": 4, 00:16:43.378 "base_bdevs_list": [ 00:16:43.378 { 00:16:43.378 "name": "BaseBdev1", 00:16:43.378 "uuid": 
"76bbbac0-fcd8-42bb-9145-b0472ae95605", 00:16:43.378 "is_configured": true, 00:16:43.378 "data_offset": 0, 00:16:43.378 "data_size": 65536 00:16:43.378 }, 00:16:43.378 { 00:16:43.378 "name": "BaseBdev2", 00:16:43.378 "uuid": "4957606f-163f-4c14-88fa-b0da6321db5c", 00:16:43.378 "is_configured": true, 00:16:43.378 "data_offset": 0, 00:16:43.378 "data_size": 65536 00:16:43.378 }, 00:16:43.378 { 00:16:43.378 "name": "BaseBdev3", 00:16:43.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.378 "is_configured": false, 00:16:43.378 "data_offset": 0, 00:16:43.378 "data_size": 0 00:16:43.378 }, 00:16:43.378 { 00:16:43.378 "name": "BaseBdev4", 00:16:43.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.378 "is_configured": false, 00:16:43.378 "data_offset": 0, 00:16:43.378 "data_size": 0 00:16:43.378 } 00:16:43.378 ] 00:16:43.378 }' 00:16:43.378 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.378 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.637 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:43.637 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.637 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.899 [2024-09-30 12:33:55.556205] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:43.899 BaseBdev3 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.899 [ 00:16:43.899 { 00:16:43.899 "name": "BaseBdev3", 00:16:43.899 "aliases": [ 00:16:43.899 "96c00364-5956-4110-9577-419f33cc0689" 00:16:43.899 ], 00:16:43.899 "product_name": "Malloc disk", 00:16:43.899 "block_size": 512, 00:16:43.899 "num_blocks": 65536, 00:16:43.899 "uuid": "96c00364-5956-4110-9577-419f33cc0689", 00:16:43.899 "assigned_rate_limits": { 00:16:43.899 "rw_ios_per_sec": 0, 00:16:43.899 "rw_mbytes_per_sec": 0, 00:16:43.899 "r_mbytes_per_sec": 0, 00:16:43.899 "w_mbytes_per_sec": 0 00:16:43.899 }, 00:16:43.899 "claimed": true, 00:16:43.899 "claim_type": "exclusive_write", 00:16:43.899 "zoned": false, 00:16:43.899 "supported_io_types": { 00:16:43.899 "read": true, 00:16:43.899 "write": true, 00:16:43.899 "unmap": true, 00:16:43.899 "flush": true, 00:16:43.899 "reset": true, 00:16:43.899 "nvme_admin": false, 
00:16:43.899 "nvme_io": false, 00:16:43.899 "nvme_io_md": false, 00:16:43.899 "write_zeroes": true, 00:16:43.899 "zcopy": true, 00:16:43.899 "get_zone_info": false, 00:16:43.899 "zone_management": false, 00:16:43.899 "zone_append": false, 00:16:43.899 "compare": false, 00:16:43.899 "compare_and_write": false, 00:16:43.899 "abort": true, 00:16:43.899 "seek_hole": false, 00:16:43.899 "seek_data": false, 00:16:43.899 "copy": true, 00:16:43.899 "nvme_iov_md": false 00:16:43.899 }, 00:16:43.899 "memory_domains": [ 00:16:43.899 { 00:16:43.899 "dma_device_id": "system", 00:16:43.899 "dma_device_type": 1 00:16:43.899 }, 00:16:43.899 { 00:16:43.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.899 "dma_device_type": 2 00:16:43.899 } 00:16:43.899 ], 00:16:43.899 "driver_specific": {} 00:16:43.899 } 00:16:43.899 ] 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.899 "name": "Existed_Raid", 00:16:43.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.899 "strip_size_kb": 64, 00:16:43.899 "state": "configuring", 00:16:43.899 "raid_level": "raid5f", 00:16:43.899 "superblock": false, 00:16:43.899 "num_base_bdevs": 4, 00:16:43.899 "num_base_bdevs_discovered": 3, 00:16:43.899 "num_base_bdevs_operational": 4, 00:16:43.899 "base_bdevs_list": [ 00:16:43.899 { 00:16:43.899 "name": "BaseBdev1", 00:16:43.899 "uuid": "76bbbac0-fcd8-42bb-9145-b0472ae95605", 00:16:43.899 "is_configured": true, 00:16:43.899 "data_offset": 0, 00:16:43.899 "data_size": 65536 00:16:43.899 }, 00:16:43.899 { 00:16:43.899 "name": "BaseBdev2", 00:16:43.899 "uuid": "4957606f-163f-4c14-88fa-b0da6321db5c", 00:16:43.899 "is_configured": true, 00:16:43.899 "data_offset": 0, 00:16:43.899 "data_size": 65536 00:16:43.899 }, 00:16:43.899 { 
00:16:43.899 "name": "BaseBdev3", 00:16:43.899 "uuid": "96c00364-5956-4110-9577-419f33cc0689", 00:16:43.899 "is_configured": true, 00:16:43.899 "data_offset": 0, 00:16:43.899 "data_size": 65536 00:16:43.899 }, 00:16:43.899 { 00:16:43.899 "name": "BaseBdev4", 00:16:43.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.899 "is_configured": false, 00:16:43.899 "data_offset": 0, 00:16:43.899 "data_size": 0 00:16:43.899 } 00:16:43.899 ] 00:16:43.899 }' 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.899 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.158 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:44.158 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.158 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.417 [2024-09-30 12:33:56.064531] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:44.417 [2024-09-30 12:33:56.064598] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:44.417 [2024-09-30 12:33:56.064611] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:44.417 [2024-09-30 12:33:56.064889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:44.417 [2024-09-30 12:33:56.071910] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:44.417 [2024-09-30 12:33:56.071935] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:44.417 [2024-09-30 12:33:56.072164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.417 BaseBdev4 00:16:44.417 12:33:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.417 [ 00:16:44.417 { 00:16:44.417 "name": "BaseBdev4", 00:16:44.417 "aliases": [ 00:16:44.417 "246be6d1-af03-44ba-9443-394ff0a08b56" 00:16:44.417 ], 00:16:44.417 "product_name": "Malloc disk", 00:16:44.417 "block_size": 512, 00:16:44.417 "num_blocks": 65536, 00:16:44.417 "uuid": "246be6d1-af03-44ba-9443-394ff0a08b56", 00:16:44.417 "assigned_rate_limits": { 00:16:44.417 "rw_ios_per_sec": 0, 00:16:44.417 
"rw_mbytes_per_sec": 0, 00:16:44.417 "r_mbytes_per_sec": 0, 00:16:44.417 "w_mbytes_per_sec": 0 00:16:44.417 }, 00:16:44.417 "claimed": true, 00:16:44.417 "claim_type": "exclusive_write", 00:16:44.417 "zoned": false, 00:16:44.417 "supported_io_types": { 00:16:44.417 "read": true, 00:16:44.417 "write": true, 00:16:44.417 "unmap": true, 00:16:44.417 "flush": true, 00:16:44.417 "reset": true, 00:16:44.417 "nvme_admin": false, 00:16:44.417 "nvme_io": false, 00:16:44.417 "nvme_io_md": false, 00:16:44.417 "write_zeroes": true, 00:16:44.417 "zcopy": true, 00:16:44.417 "get_zone_info": false, 00:16:44.417 "zone_management": false, 00:16:44.417 "zone_append": false, 00:16:44.417 "compare": false, 00:16:44.417 "compare_and_write": false, 00:16:44.417 "abort": true, 00:16:44.417 "seek_hole": false, 00:16:44.417 "seek_data": false, 00:16:44.417 "copy": true, 00:16:44.417 "nvme_iov_md": false 00:16:44.417 }, 00:16:44.417 "memory_domains": [ 00:16:44.417 { 00:16:44.417 "dma_device_id": "system", 00:16:44.417 "dma_device_type": 1 00:16:44.417 }, 00:16:44.417 { 00:16:44.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.417 "dma_device_type": 2 00:16:44.417 } 00:16:44.417 ], 00:16:44.417 "driver_specific": {} 00:16:44.417 } 00:16:44.417 ] 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.417 12:33:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.417 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.417 "name": "Existed_Raid", 00:16:44.417 "uuid": "f2fe2608-ae92-4cad-8040-d4f8240eb7b6", 00:16:44.417 "strip_size_kb": 64, 00:16:44.417 "state": "online", 00:16:44.417 "raid_level": "raid5f", 00:16:44.417 "superblock": false, 00:16:44.417 "num_base_bdevs": 4, 00:16:44.417 "num_base_bdevs_discovered": 4, 00:16:44.417 "num_base_bdevs_operational": 4, 00:16:44.417 "base_bdevs_list": [ 00:16:44.417 { 00:16:44.417 "name": 
"BaseBdev1", 00:16:44.417 "uuid": "76bbbac0-fcd8-42bb-9145-b0472ae95605", 00:16:44.417 "is_configured": true, 00:16:44.417 "data_offset": 0, 00:16:44.418 "data_size": 65536 00:16:44.418 }, 00:16:44.418 { 00:16:44.418 "name": "BaseBdev2", 00:16:44.418 "uuid": "4957606f-163f-4c14-88fa-b0da6321db5c", 00:16:44.418 "is_configured": true, 00:16:44.418 "data_offset": 0, 00:16:44.418 "data_size": 65536 00:16:44.418 }, 00:16:44.418 { 00:16:44.418 "name": "BaseBdev3", 00:16:44.418 "uuid": "96c00364-5956-4110-9577-419f33cc0689", 00:16:44.418 "is_configured": true, 00:16:44.418 "data_offset": 0, 00:16:44.418 "data_size": 65536 00:16:44.418 }, 00:16:44.418 { 00:16:44.418 "name": "BaseBdev4", 00:16:44.418 "uuid": "246be6d1-af03-44ba-9443-394ff0a08b56", 00:16:44.418 "is_configured": true, 00:16:44.418 "data_offset": 0, 00:16:44.418 "data_size": 65536 00:16:44.418 } 00:16:44.418 ] 00:16:44.418 }' 00:16:44.418 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.418 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.677 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:44.677 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:44.677 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:44.677 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:44.677 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:44.677 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:44.677 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:44.677 12:33:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:44.677 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.677 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.677 [2024-09-30 12:33:56.543516] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.677 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.936 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:44.936 "name": "Existed_Raid", 00:16:44.936 "aliases": [ 00:16:44.936 "f2fe2608-ae92-4cad-8040-d4f8240eb7b6" 00:16:44.936 ], 00:16:44.936 "product_name": "Raid Volume", 00:16:44.936 "block_size": 512, 00:16:44.936 "num_blocks": 196608, 00:16:44.936 "uuid": "f2fe2608-ae92-4cad-8040-d4f8240eb7b6", 00:16:44.936 "assigned_rate_limits": { 00:16:44.936 "rw_ios_per_sec": 0, 00:16:44.936 "rw_mbytes_per_sec": 0, 00:16:44.936 "r_mbytes_per_sec": 0, 00:16:44.936 "w_mbytes_per_sec": 0 00:16:44.936 }, 00:16:44.936 "claimed": false, 00:16:44.936 "zoned": false, 00:16:44.936 "supported_io_types": { 00:16:44.936 "read": true, 00:16:44.936 "write": true, 00:16:44.936 "unmap": false, 00:16:44.936 "flush": false, 00:16:44.936 "reset": true, 00:16:44.936 "nvme_admin": false, 00:16:44.937 "nvme_io": false, 00:16:44.937 "nvme_io_md": false, 00:16:44.937 "write_zeroes": true, 00:16:44.937 "zcopy": false, 00:16:44.937 "get_zone_info": false, 00:16:44.937 "zone_management": false, 00:16:44.937 "zone_append": false, 00:16:44.937 "compare": false, 00:16:44.937 "compare_and_write": false, 00:16:44.937 "abort": false, 00:16:44.937 "seek_hole": false, 00:16:44.937 "seek_data": false, 00:16:44.937 "copy": false, 00:16:44.937 "nvme_iov_md": false 00:16:44.937 }, 00:16:44.937 "driver_specific": { 00:16:44.937 "raid": { 00:16:44.937 "uuid": "f2fe2608-ae92-4cad-8040-d4f8240eb7b6", 00:16:44.937 "strip_size_kb": 64, 
00:16:44.937 "state": "online", 00:16:44.937 "raid_level": "raid5f", 00:16:44.937 "superblock": false, 00:16:44.937 "num_base_bdevs": 4, 00:16:44.937 "num_base_bdevs_discovered": 4, 00:16:44.937 "num_base_bdevs_operational": 4, 00:16:44.937 "base_bdevs_list": [ 00:16:44.937 { 00:16:44.937 "name": "BaseBdev1", 00:16:44.937 "uuid": "76bbbac0-fcd8-42bb-9145-b0472ae95605", 00:16:44.937 "is_configured": true, 00:16:44.937 "data_offset": 0, 00:16:44.937 "data_size": 65536 00:16:44.937 }, 00:16:44.937 { 00:16:44.937 "name": "BaseBdev2", 00:16:44.937 "uuid": "4957606f-163f-4c14-88fa-b0da6321db5c", 00:16:44.937 "is_configured": true, 00:16:44.937 "data_offset": 0, 00:16:44.937 "data_size": 65536 00:16:44.937 }, 00:16:44.937 { 00:16:44.937 "name": "BaseBdev3", 00:16:44.937 "uuid": "96c00364-5956-4110-9577-419f33cc0689", 00:16:44.937 "is_configured": true, 00:16:44.937 "data_offset": 0, 00:16:44.937 "data_size": 65536 00:16:44.937 }, 00:16:44.937 { 00:16:44.937 "name": "BaseBdev4", 00:16:44.937 "uuid": "246be6d1-af03-44ba-9443-394ff0a08b56", 00:16:44.937 "is_configured": true, 00:16:44.937 "data_offset": 0, 00:16:44.937 "data_size": 65536 00:16:44.937 } 00:16:44.937 ] 00:16:44.937 } 00:16:44.937 } 00:16:44.937 }' 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:44.937 BaseBdev2 00:16:44.937 BaseBdev3 00:16:44.937 BaseBdev4' 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.937 12:33:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.937 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.196 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.196 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.196 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.196 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:45.196 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.196 12:33:56 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:45.196 [2024-09-30 12:33:56.842867] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:45.196 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.196 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:45.196 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:45.196 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:45.197 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:45.197 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:45.197 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:45.197 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.197 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.197 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.197 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.197 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.197 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.197 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.197 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.197 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.197 12:33:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.197 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.197 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.197 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.197 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.197 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.197 "name": "Existed_Raid", 00:16:45.197 "uuid": "f2fe2608-ae92-4cad-8040-d4f8240eb7b6", 00:16:45.197 "strip_size_kb": 64, 00:16:45.197 "state": "online", 00:16:45.197 "raid_level": "raid5f", 00:16:45.197 "superblock": false, 00:16:45.197 "num_base_bdevs": 4, 00:16:45.197 "num_base_bdevs_discovered": 3, 00:16:45.197 "num_base_bdevs_operational": 3, 00:16:45.197 "base_bdevs_list": [ 00:16:45.197 { 00:16:45.197 "name": null, 00:16:45.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.197 "is_configured": false, 00:16:45.197 "data_offset": 0, 00:16:45.197 "data_size": 65536 00:16:45.197 }, 00:16:45.197 { 00:16:45.197 "name": "BaseBdev2", 00:16:45.197 "uuid": "4957606f-163f-4c14-88fa-b0da6321db5c", 00:16:45.197 "is_configured": true, 00:16:45.197 "data_offset": 0, 00:16:45.197 "data_size": 65536 00:16:45.197 }, 00:16:45.197 { 00:16:45.197 "name": "BaseBdev3", 00:16:45.197 "uuid": "96c00364-5956-4110-9577-419f33cc0689", 00:16:45.197 "is_configured": true, 00:16:45.197 "data_offset": 0, 00:16:45.197 "data_size": 65536 00:16:45.197 }, 00:16:45.197 { 00:16:45.197 "name": "BaseBdev4", 00:16:45.197 "uuid": "246be6d1-af03-44ba-9443-394ff0a08b56", 00:16:45.197 "is_configured": true, 00:16:45.197 "data_offset": 0, 00:16:45.197 "data_size": 65536 00:16:45.197 } 00:16:45.197 ] 00:16:45.197 }' 00:16:45.197 
12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.197 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.768 [2024-09-30 12:33:57.418289] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:45.768 [2024-09-30 12:33:57.418388] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.768 [2024-09-30 12:33:57.508193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.768 [2024-09-30 12:33:57.564137] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.768 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.032 [2024-09-30 12:33:57.708224] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:46.032 [2024-09-30 12:33:57.708276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.032 12:33:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.032 BaseBdev2 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:46.032 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:46.033 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:46.033 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:46.033 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.033 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.033 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:46.033 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.033 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.033 [ 00:16:46.033 { 00:16:46.033 "name": "BaseBdev2", 00:16:46.033 "aliases": [ 00:16:46.033 "411c2974-8cf9-4b70-bff3-69977a6cdc36" 00:16:46.033 ], 00:16:46.033 "product_name": "Malloc disk", 00:16:46.033 "block_size": 512, 00:16:46.033 "num_blocks": 65536, 00:16:46.033 "uuid": "411c2974-8cf9-4b70-bff3-69977a6cdc36", 00:16:46.033 "assigned_rate_limits": { 00:16:46.033 "rw_ios_per_sec": 0, 00:16:46.033 "rw_mbytes_per_sec": 0, 00:16:46.033 "r_mbytes_per_sec": 0, 00:16:46.033 "w_mbytes_per_sec": 0 00:16:46.033 }, 00:16:46.033 "claimed": false, 00:16:46.033 "zoned": false, 00:16:46.033 "supported_io_types": { 00:16:46.033 "read": true, 00:16:46.033 "write": true, 00:16:46.033 "unmap": true, 00:16:46.033 "flush": true, 00:16:46.033 "reset": true, 00:16:46.033 "nvme_admin": false, 00:16:46.033 "nvme_io": false, 00:16:46.033 "nvme_io_md": false, 00:16:46.033 "write_zeroes": true, 00:16:46.033 "zcopy": true, 00:16:46.033 "get_zone_info": false, 00:16:46.033 "zone_management": false, 00:16:46.033 "zone_append": false, 00:16:46.033 "compare": false, 00:16:46.033 "compare_and_write": false, 00:16:46.033 "abort": true, 00:16:46.033 "seek_hole": false, 00:16:46.033 "seek_data": false, 00:16:46.033 "copy": true, 00:16:46.033 "nvme_iov_md": false 00:16:46.033 }, 00:16:46.033 "memory_domains": [ 00:16:46.033 { 00:16:46.033 "dma_device_id": "system", 00:16:46.033 "dma_device_type": 1 00:16:46.033 }, 
00:16:46.033 { 00:16:46.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.033 "dma_device_type": 2 00:16:46.033 } 00:16:46.033 ], 00:16:46.033 "driver_specific": {} 00:16:46.033 } 00:16:46.033 ] 00:16:46.033 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.033 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:46.033 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:46.033 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:46.033 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:46.033 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.033 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.296 BaseBdev3 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.296 [ 00:16:46.296 { 00:16:46.296 "name": "BaseBdev3", 00:16:46.296 "aliases": [ 00:16:46.296 "9d7b0dbe-0cab-437c-9e95-1e78ff6b04d5" 00:16:46.296 ], 00:16:46.296 "product_name": "Malloc disk", 00:16:46.296 "block_size": 512, 00:16:46.296 "num_blocks": 65536, 00:16:46.296 "uuid": "9d7b0dbe-0cab-437c-9e95-1e78ff6b04d5", 00:16:46.296 "assigned_rate_limits": { 00:16:46.296 "rw_ios_per_sec": 0, 00:16:46.296 "rw_mbytes_per_sec": 0, 00:16:46.296 "r_mbytes_per_sec": 0, 00:16:46.296 "w_mbytes_per_sec": 0 00:16:46.296 }, 00:16:46.296 "claimed": false, 00:16:46.296 "zoned": false, 00:16:46.296 "supported_io_types": { 00:16:46.296 "read": true, 00:16:46.296 "write": true, 00:16:46.296 "unmap": true, 00:16:46.296 "flush": true, 00:16:46.296 "reset": true, 00:16:46.296 "nvme_admin": false, 00:16:46.296 "nvme_io": false, 00:16:46.296 "nvme_io_md": false, 00:16:46.296 "write_zeroes": true, 00:16:46.296 "zcopy": true, 00:16:46.296 "get_zone_info": false, 00:16:46.296 "zone_management": false, 00:16:46.296 "zone_append": false, 00:16:46.296 "compare": false, 00:16:46.296 "compare_and_write": false, 00:16:46.296 "abort": true, 00:16:46.296 "seek_hole": false, 00:16:46.296 "seek_data": false, 00:16:46.296 "copy": true, 00:16:46.296 "nvme_iov_md": false 00:16:46.296 }, 00:16:46.296 "memory_domains": [ 00:16:46.296 { 00:16:46.296 "dma_device_id": "system", 00:16:46.296 
"dma_device_type": 1 00:16:46.296 }, 00:16:46.296 { 00:16:46.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.296 "dma_device_type": 2 00:16:46.296 } 00:16:46.296 ], 00:16:46.296 "driver_specific": {} 00:16:46.296 } 00:16:46.296 ] 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.296 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.297 BaseBdev4 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:46.297 12:33:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.297 [ 00:16:46.297 { 00:16:46.297 "name": "BaseBdev4", 00:16:46.297 "aliases": [ 00:16:46.297 "ba222d1c-5caa-4586-b4f3-a378834ac75a" 00:16:46.297 ], 00:16:46.297 "product_name": "Malloc disk", 00:16:46.297 "block_size": 512, 00:16:46.297 "num_blocks": 65536, 00:16:46.297 "uuid": "ba222d1c-5caa-4586-b4f3-a378834ac75a", 00:16:46.297 "assigned_rate_limits": { 00:16:46.297 "rw_ios_per_sec": 0, 00:16:46.297 "rw_mbytes_per_sec": 0, 00:16:46.297 "r_mbytes_per_sec": 0, 00:16:46.297 "w_mbytes_per_sec": 0 00:16:46.297 }, 00:16:46.297 "claimed": false, 00:16:46.297 "zoned": false, 00:16:46.297 "supported_io_types": { 00:16:46.297 "read": true, 00:16:46.297 "write": true, 00:16:46.297 "unmap": true, 00:16:46.297 "flush": true, 00:16:46.297 "reset": true, 00:16:46.297 "nvme_admin": false, 00:16:46.297 "nvme_io": false, 00:16:46.297 "nvme_io_md": false, 00:16:46.297 "write_zeroes": true, 00:16:46.297 "zcopy": true, 00:16:46.297 "get_zone_info": false, 00:16:46.297 "zone_management": false, 00:16:46.297 "zone_append": false, 00:16:46.297 "compare": false, 00:16:46.297 "compare_and_write": false, 00:16:46.297 "abort": true, 00:16:46.297 "seek_hole": false, 00:16:46.297 "seek_data": false, 00:16:46.297 "copy": true, 00:16:46.297 "nvme_iov_md": false 00:16:46.297 }, 00:16:46.297 "memory_domains": [ 00:16:46.297 { 00:16:46.297 
"dma_device_id": "system", 00:16:46.297 "dma_device_type": 1 00:16:46.297 }, 00:16:46.297 { 00:16:46.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.297 "dma_device_type": 2 00:16:46.297 } 00:16:46.297 ], 00:16:46.297 "driver_specific": {} 00:16:46.297 } 00:16:46.297 ] 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.297 [2024-09-30 12:33:58.080633] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.297 [2024-09-30 12:33:58.080683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.297 [2024-09-30 12:33:58.080702] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.297 [2024-09-30 12:33:58.082323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:46.297 [2024-09-30 12:33:58.082374] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.297 "name": "Existed_Raid", 00:16:46.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.297 "strip_size_kb": 64, 00:16:46.297 "state": "configuring", 00:16:46.297 "raid_level": "raid5f", 00:16:46.297 "superblock": false, 00:16:46.297 
"num_base_bdevs": 4, 00:16:46.297 "num_base_bdevs_discovered": 3, 00:16:46.297 "num_base_bdevs_operational": 4, 00:16:46.297 "base_bdevs_list": [ 00:16:46.297 { 00:16:46.297 "name": "BaseBdev1", 00:16:46.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.297 "is_configured": false, 00:16:46.297 "data_offset": 0, 00:16:46.297 "data_size": 0 00:16:46.297 }, 00:16:46.297 { 00:16:46.297 "name": "BaseBdev2", 00:16:46.297 "uuid": "411c2974-8cf9-4b70-bff3-69977a6cdc36", 00:16:46.297 "is_configured": true, 00:16:46.297 "data_offset": 0, 00:16:46.297 "data_size": 65536 00:16:46.297 }, 00:16:46.297 { 00:16:46.297 "name": "BaseBdev3", 00:16:46.297 "uuid": "9d7b0dbe-0cab-437c-9e95-1e78ff6b04d5", 00:16:46.297 "is_configured": true, 00:16:46.297 "data_offset": 0, 00:16:46.297 "data_size": 65536 00:16:46.297 }, 00:16:46.297 { 00:16:46.297 "name": "BaseBdev4", 00:16:46.297 "uuid": "ba222d1c-5caa-4586-b4f3-a378834ac75a", 00:16:46.297 "is_configured": true, 00:16:46.297 "data_offset": 0, 00:16:46.297 "data_size": 65536 00:16:46.297 } 00:16:46.297 ] 00:16:46.297 }' 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.297 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.868 [2024-09-30 12:33:58.531836] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.868 "name": "Existed_Raid", 00:16:46.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.868 "strip_size_kb": 64, 00:16:46.868 "state": "configuring", 00:16:46.868 "raid_level": "raid5f", 00:16:46.868 "superblock": false, 00:16:46.868 "num_base_bdevs": 4, 
00:16:46.868 "num_base_bdevs_discovered": 2, 00:16:46.868 "num_base_bdevs_operational": 4, 00:16:46.868 "base_bdevs_list": [ 00:16:46.868 { 00:16:46.868 "name": "BaseBdev1", 00:16:46.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.868 "is_configured": false, 00:16:46.868 "data_offset": 0, 00:16:46.868 "data_size": 0 00:16:46.868 }, 00:16:46.868 { 00:16:46.868 "name": null, 00:16:46.868 "uuid": "411c2974-8cf9-4b70-bff3-69977a6cdc36", 00:16:46.868 "is_configured": false, 00:16:46.868 "data_offset": 0, 00:16:46.868 "data_size": 65536 00:16:46.868 }, 00:16:46.868 { 00:16:46.868 "name": "BaseBdev3", 00:16:46.868 "uuid": "9d7b0dbe-0cab-437c-9e95-1e78ff6b04d5", 00:16:46.868 "is_configured": true, 00:16:46.868 "data_offset": 0, 00:16:46.868 "data_size": 65536 00:16:46.868 }, 00:16:46.868 { 00:16:46.868 "name": "BaseBdev4", 00:16:46.868 "uuid": "ba222d1c-5caa-4586-b4f3-a378834ac75a", 00:16:46.868 "is_configured": true, 00:16:46.868 "data_offset": 0, 00:16:46.868 "data_size": 65536 00:16:46.868 } 00:16:46.868 ] 00:16:46.868 }' 00:16:46.868 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.869 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.129 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:47.129 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.129 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.129 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.129 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.129 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:47.129 12:33:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:47.129 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.129 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.129 [2024-09-30 12:33:59.009700] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.129 BaseBdev1 00:16:47.129 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.129 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:47.129 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:47.129 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:47.129 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:47.129 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:47.129 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:47.129 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:47.129 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.129 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.129 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.129 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:47.129 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.129 12:33:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.389 [ 00:16:47.389 { 00:16:47.389 "name": "BaseBdev1", 00:16:47.389 "aliases": [ 00:16:47.389 "eb8d4838-7a96-4cf2-9db7-75df58621ed4" 00:16:47.389 ], 00:16:47.389 "product_name": "Malloc disk", 00:16:47.389 "block_size": 512, 00:16:47.389 "num_blocks": 65536, 00:16:47.389 "uuid": "eb8d4838-7a96-4cf2-9db7-75df58621ed4", 00:16:47.389 "assigned_rate_limits": { 00:16:47.389 "rw_ios_per_sec": 0, 00:16:47.389 "rw_mbytes_per_sec": 0, 00:16:47.389 "r_mbytes_per_sec": 0, 00:16:47.389 "w_mbytes_per_sec": 0 00:16:47.389 }, 00:16:47.389 "claimed": true, 00:16:47.389 "claim_type": "exclusive_write", 00:16:47.389 "zoned": false, 00:16:47.389 "supported_io_types": { 00:16:47.389 "read": true, 00:16:47.389 "write": true, 00:16:47.389 "unmap": true, 00:16:47.389 "flush": true, 00:16:47.389 "reset": true, 00:16:47.389 "nvme_admin": false, 00:16:47.389 "nvme_io": false, 00:16:47.389 "nvme_io_md": false, 00:16:47.389 "write_zeroes": true, 00:16:47.389 "zcopy": true, 00:16:47.389 "get_zone_info": false, 00:16:47.389 "zone_management": false, 00:16:47.389 "zone_append": false, 00:16:47.389 "compare": false, 00:16:47.389 "compare_and_write": false, 00:16:47.389 "abort": true, 00:16:47.389 "seek_hole": false, 00:16:47.389 "seek_data": false, 00:16:47.389 "copy": true, 00:16:47.389 "nvme_iov_md": false 00:16:47.389 }, 00:16:47.389 "memory_domains": [ 00:16:47.389 { 00:16:47.389 "dma_device_id": "system", 00:16:47.389 "dma_device_type": 1 00:16:47.389 }, 00:16:47.389 { 00:16:47.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.389 "dma_device_type": 2 00:16:47.389 } 00:16:47.389 ], 00:16:47.389 "driver_specific": {} 00:16:47.389 } 00:16:47.389 ] 00:16:47.389 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.389 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:47.389 12:33:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:47.389 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.389 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.389 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.389 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.389 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.389 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.389 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.389 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.389 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.389 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.389 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.389 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.389 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.389 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.389 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.389 "name": "Existed_Raid", 00:16:47.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.389 "strip_size_kb": 64, 00:16:47.389 "state": 
"configuring", 00:16:47.389 "raid_level": "raid5f", 00:16:47.389 "superblock": false, 00:16:47.389 "num_base_bdevs": 4, 00:16:47.389 "num_base_bdevs_discovered": 3, 00:16:47.389 "num_base_bdevs_operational": 4, 00:16:47.389 "base_bdevs_list": [ 00:16:47.389 { 00:16:47.389 "name": "BaseBdev1", 00:16:47.389 "uuid": "eb8d4838-7a96-4cf2-9db7-75df58621ed4", 00:16:47.389 "is_configured": true, 00:16:47.389 "data_offset": 0, 00:16:47.389 "data_size": 65536 00:16:47.389 }, 00:16:47.389 { 00:16:47.389 "name": null, 00:16:47.389 "uuid": "411c2974-8cf9-4b70-bff3-69977a6cdc36", 00:16:47.389 "is_configured": false, 00:16:47.389 "data_offset": 0, 00:16:47.389 "data_size": 65536 00:16:47.389 }, 00:16:47.389 { 00:16:47.390 "name": "BaseBdev3", 00:16:47.390 "uuid": "9d7b0dbe-0cab-437c-9e95-1e78ff6b04d5", 00:16:47.390 "is_configured": true, 00:16:47.390 "data_offset": 0, 00:16:47.390 "data_size": 65536 00:16:47.390 }, 00:16:47.390 { 00:16:47.390 "name": "BaseBdev4", 00:16:47.390 "uuid": "ba222d1c-5caa-4586-b4f3-a378834ac75a", 00:16:47.390 "is_configured": true, 00:16:47.390 "data_offset": 0, 00:16:47.390 "data_size": 65536 00:16:47.390 } 00:16:47.390 ] 00:16:47.390 }' 00:16:47.390 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.390 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.650 12:33:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.650 [2024-09-30 12:33:59.524835] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.650 12:33:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.650 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.910 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.910 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.910 "name": "Existed_Raid", 00:16:47.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.910 "strip_size_kb": 64, 00:16:47.910 "state": "configuring", 00:16:47.910 "raid_level": "raid5f", 00:16:47.910 "superblock": false, 00:16:47.910 "num_base_bdevs": 4, 00:16:47.910 "num_base_bdevs_discovered": 2, 00:16:47.910 "num_base_bdevs_operational": 4, 00:16:47.910 "base_bdevs_list": [ 00:16:47.910 { 00:16:47.910 "name": "BaseBdev1", 00:16:47.910 "uuid": "eb8d4838-7a96-4cf2-9db7-75df58621ed4", 00:16:47.910 "is_configured": true, 00:16:47.910 "data_offset": 0, 00:16:47.910 "data_size": 65536 00:16:47.910 }, 00:16:47.910 { 00:16:47.910 "name": null, 00:16:47.910 "uuid": "411c2974-8cf9-4b70-bff3-69977a6cdc36", 00:16:47.910 "is_configured": false, 00:16:47.910 "data_offset": 0, 00:16:47.910 "data_size": 65536 00:16:47.910 }, 00:16:47.910 { 00:16:47.910 "name": null, 00:16:47.910 "uuid": "9d7b0dbe-0cab-437c-9e95-1e78ff6b04d5", 00:16:47.910 "is_configured": false, 00:16:47.910 "data_offset": 0, 00:16:47.910 "data_size": 65536 00:16:47.910 }, 00:16:47.910 { 00:16:47.910 "name": "BaseBdev4", 00:16:47.910 "uuid": "ba222d1c-5caa-4586-b4f3-a378834ac75a", 00:16:47.910 "is_configured": true, 00:16:47.910 "data_offset": 0, 00:16:47.910 "data_size": 65536 00:16:47.910 } 00:16:47.910 ] 00:16:47.910 }' 00:16:47.910 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.910 12:33:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.170 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.170 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:48.170 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.170 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.170 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.170 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:48.170 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:48.170 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.170 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.170 [2024-09-30 12:34:00.004090] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:48.170 12:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.170 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:48.170 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.170 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.170 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.170 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.170 
12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:48.170 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.170 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.170 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.170 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.170 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.170 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.170 12:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.170 12:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.170 12:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.170 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.170 "name": "Existed_Raid", 00:16:48.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.170 "strip_size_kb": 64, 00:16:48.170 "state": "configuring", 00:16:48.170 "raid_level": "raid5f", 00:16:48.170 "superblock": false, 00:16:48.170 "num_base_bdevs": 4, 00:16:48.170 "num_base_bdevs_discovered": 3, 00:16:48.170 "num_base_bdevs_operational": 4, 00:16:48.170 "base_bdevs_list": [ 00:16:48.170 { 00:16:48.170 "name": "BaseBdev1", 00:16:48.170 "uuid": "eb8d4838-7a96-4cf2-9db7-75df58621ed4", 00:16:48.170 "is_configured": true, 00:16:48.170 "data_offset": 0, 00:16:48.170 "data_size": 65536 00:16:48.170 }, 00:16:48.170 { 00:16:48.170 "name": null, 00:16:48.170 "uuid": "411c2974-8cf9-4b70-bff3-69977a6cdc36", 00:16:48.170 "is_configured": 
false, 00:16:48.170 "data_offset": 0, 00:16:48.170 "data_size": 65536 00:16:48.170 }, 00:16:48.170 { 00:16:48.170 "name": "BaseBdev3", 00:16:48.170 "uuid": "9d7b0dbe-0cab-437c-9e95-1e78ff6b04d5", 00:16:48.170 "is_configured": true, 00:16:48.170 "data_offset": 0, 00:16:48.170 "data_size": 65536 00:16:48.170 }, 00:16:48.170 { 00:16:48.170 "name": "BaseBdev4", 00:16:48.170 "uuid": "ba222d1c-5caa-4586-b4f3-a378834ac75a", 00:16:48.170 "is_configured": true, 00:16:48.170 "data_offset": 0, 00:16:48.170 "data_size": 65536 00:16:48.170 } 00:16:48.170 ] 00:16:48.170 }' 00:16:48.170 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.170 12:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.741 [2024-09-30 12:34:00.503241] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.741 12:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.001 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.001 "name": "Existed_Raid", 00:16:49.001 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:49.001 "strip_size_kb": 64, 00:16:49.001 "state": "configuring", 00:16:49.001 "raid_level": "raid5f", 00:16:49.001 "superblock": false, 00:16:49.001 "num_base_bdevs": 4, 00:16:49.001 "num_base_bdevs_discovered": 2, 00:16:49.001 "num_base_bdevs_operational": 4, 00:16:49.001 "base_bdevs_list": [ 00:16:49.001 { 00:16:49.001 "name": null, 00:16:49.001 "uuid": "eb8d4838-7a96-4cf2-9db7-75df58621ed4", 00:16:49.001 "is_configured": false, 00:16:49.001 "data_offset": 0, 00:16:49.001 "data_size": 65536 00:16:49.001 }, 00:16:49.001 { 00:16:49.001 "name": null, 00:16:49.001 "uuid": "411c2974-8cf9-4b70-bff3-69977a6cdc36", 00:16:49.001 "is_configured": false, 00:16:49.001 "data_offset": 0, 00:16:49.001 "data_size": 65536 00:16:49.001 }, 00:16:49.001 { 00:16:49.001 "name": "BaseBdev3", 00:16:49.001 "uuid": "9d7b0dbe-0cab-437c-9e95-1e78ff6b04d5", 00:16:49.001 "is_configured": true, 00:16:49.001 "data_offset": 0, 00:16:49.001 "data_size": 65536 00:16:49.001 }, 00:16:49.001 { 00:16:49.001 "name": "BaseBdev4", 00:16:49.001 "uuid": "ba222d1c-5caa-4586-b4f3-a378834ac75a", 00:16:49.001 "is_configured": true, 00:16:49.001 "data_offset": 0, 00:16:49.001 "data_size": 65536 00:16:49.001 } 00:16:49.001 ] 00:16:49.001 }' 00:16:49.001 12:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.001 12:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.261 [2024-09-30 12:34:01.075450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.261 "name": "Existed_Raid", 00:16:49.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.261 "strip_size_kb": 64, 00:16:49.261 "state": "configuring", 00:16:49.261 "raid_level": "raid5f", 00:16:49.261 "superblock": false, 00:16:49.261 "num_base_bdevs": 4, 00:16:49.261 "num_base_bdevs_discovered": 3, 00:16:49.261 "num_base_bdevs_operational": 4, 00:16:49.261 "base_bdevs_list": [ 00:16:49.261 { 00:16:49.261 "name": null, 00:16:49.261 "uuid": "eb8d4838-7a96-4cf2-9db7-75df58621ed4", 00:16:49.261 "is_configured": false, 00:16:49.261 "data_offset": 0, 00:16:49.261 "data_size": 65536 00:16:49.261 }, 00:16:49.261 { 00:16:49.261 "name": "BaseBdev2", 00:16:49.261 "uuid": "411c2974-8cf9-4b70-bff3-69977a6cdc36", 00:16:49.261 "is_configured": true, 00:16:49.261 "data_offset": 0, 00:16:49.261 "data_size": 65536 00:16:49.261 }, 00:16:49.261 { 00:16:49.261 "name": "BaseBdev3", 00:16:49.261 "uuid": "9d7b0dbe-0cab-437c-9e95-1e78ff6b04d5", 00:16:49.261 "is_configured": true, 00:16:49.261 "data_offset": 0, 00:16:49.261 "data_size": 65536 00:16:49.261 }, 00:16:49.261 { 00:16:49.261 "name": "BaseBdev4", 00:16:49.261 "uuid": "ba222d1c-5caa-4586-b4f3-a378834ac75a", 00:16:49.261 "is_configured": true, 00:16:49.261 "data_offset": 0, 00:16:49.261 "data_size": 65536 00:16:49.261 } 00:16:49.261 ] 00:16:49.261 }' 00:16:49.261 12:34:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.261 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u eb8d4838-7a96-4cf2-9db7-75df58621ed4 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.832 [2024-09-30 12:34:01.617325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:49.832 [2024-09-30 
12:34:01.617376] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:49.832 [2024-09-30 12:34:01.617383] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:49.832 [2024-09-30 12:34:01.617613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:49.832 [2024-09-30 12:34:01.624602] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:49.832 [2024-09-30 12:34:01.624687] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:49.832 [2024-09-30 12:34:01.624938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.832 NewBaseBdev 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.832 [ 00:16:49.832 { 00:16:49.832 "name": "NewBaseBdev", 00:16:49.832 "aliases": [ 00:16:49.832 "eb8d4838-7a96-4cf2-9db7-75df58621ed4" 00:16:49.832 ], 00:16:49.832 "product_name": "Malloc disk", 00:16:49.832 "block_size": 512, 00:16:49.832 "num_blocks": 65536, 00:16:49.832 "uuid": "eb8d4838-7a96-4cf2-9db7-75df58621ed4", 00:16:49.832 "assigned_rate_limits": { 00:16:49.832 "rw_ios_per_sec": 0, 00:16:49.832 "rw_mbytes_per_sec": 0, 00:16:49.832 "r_mbytes_per_sec": 0, 00:16:49.832 "w_mbytes_per_sec": 0 00:16:49.832 }, 00:16:49.832 "claimed": true, 00:16:49.832 "claim_type": "exclusive_write", 00:16:49.832 "zoned": false, 00:16:49.832 "supported_io_types": { 00:16:49.832 "read": true, 00:16:49.832 "write": true, 00:16:49.832 "unmap": true, 00:16:49.832 "flush": true, 00:16:49.832 "reset": true, 00:16:49.832 "nvme_admin": false, 00:16:49.832 "nvme_io": false, 00:16:49.832 "nvme_io_md": false, 00:16:49.832 "write_zeroes": true, 00:16:49.832 "zcopy": true, 00:16:49.832 "get_zone_info": false, 00:16:49.832 "zone_management": false, 00:16:49.832 "zone_append": false, 00:16:49.832 "compare": false, 00:16:49.832 "compare_and_write": false, 00:16:49.832 "abort": true, 00:16:49.832 "seek_hole": false, 00:16:49.832 "seek_data": false, 00:16:49.832 "copy": true, 00:16:49.832 "nvme_iov_md": false 00:16:49.832 }, 00:16:49.832 "memory_domains": [ 00:16:49.832 { 00:16:49.832 "dma_device_id": "system", 00:16:49.832 "dma_device_type": 1 00:16:49.832 }, 00:16:49.832 { 00:16:49.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.832 "dma_device_type": 2 00:16:49.832 } 
00:16:49.832 ], 00:16:49.832 "driver_specific": {} 00:16:49.832 } 00:16:49.832 ] 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.832 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.832 "name": "Existed_Raid", 00:16:49.832 "uuid": "0fa72a59-6a1c-4792-a426-07554bab6ea6", 00:16:49.832 "strip_size_kb": 64, 00:16:49.832 "state": "online", 00:16:49.832 "raid_level": "raid5f", 00:16:49.832 "superblock": false, 00:16:49.832 "num_base_bdevs": 4, 00:16:49.832 "num_base_bdevs_discovered": 4, 00:16:49.832 "num_base_bdevs_operational": 4, 00:16:49.832 "base_bdevs_list": [ 00:16:49.832 { 00:16:49.832 "name": "NewBaseBdev", 00:16:49.832 "uuid": "eb8d4838-7a96-4cf2-9db7-75df58621ed4", 00:16:49.832 "is_configured": true, 00:16:49.832 "data_offset": 0, 00:16:49.832 "data_size": 65536 00:16:49.832 }, 00:16:49.832 { 00:16:49.832 "name": "BaseBdev2", 00:16:49.832 "uuid": "411c2974-8cf9-4b70-bff3-69977a6cdc36", 00:16:49.832 "is_configured": true, 00:16:49.832 "data_offset": 0, 00:16:49.832 "data_size": 65536 00:16:49.832 }, 00:16:49.832 { 00:16:49.832 "name": "BaseBdev3", 00:16:49.832 "uuid": "9d7b0dbe-0cab-437c-9e95-1e78ff6b04d5", 00:16:49.832 "is_configured": true, 00:16:49.832 "data_offset": 0, 00:16:49.832 "data_size": 65536 00:16:49.832 }, 00:16:49.832 { 00:16:49.832 "name": "BaseBdev4", 00:16:49.832 "uuid": "ba222d1c-5caa-4586-b4f3-a378834ac75a", 00:16:49.833 "is_configured": true, 00:16:49.833 "data_offset": 0, 00:16:49.833 "data_size": 65536 00:16:49.833 } 00:16:49.833 ] 00:16:49.833 }' 00:16:49.833 12:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.833 12:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.402 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.403 [2024-09-30 12:34:02.112090] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:50.403 "name": "Existed_Raid", 00:16:50.403 "aliases": [ 00:16:50.403 "0fa72a59-6a1c-4792-a426-07554bab6ea6" 00:16:50.403 ], 00:16:50.403 "product_name": "Raid Volume", 00:16:50.403 "block_size": 512, 00:16:50.403 "num_blocks": 196608, 00:16:50.403 "uuid": "0fa72a59-6a1c-4792-a426-07554bab6ea6", 00:16:50.403 "assigned_rate_limits": { 00:16:50.403 "rw_ios_per_sec": 0, 00:16:50.403 "rw_mbytes_per_sec": 0, 00:16:50.403 "r_mbytes_per_sec": 0, 00:16:50.403 "w_mbytes_per_sec": 0 00:16:50.403 }, 00:16:50.403 "claimed": false, 00:16:50.403 "zoned": false, 00:16:50.403 "supported_io_types": { 00:16:50.403 "read": true, 00:16:50.403 "write": true, 00:16:50.403 "unmap": false, 00:16:50.403 "flush": false, 00:16:50.403 "reset": true, 00:16:50.403 "nvme_admin": false, 00:16:50.403 "nvme_io": false, 00:16:50.403 "nvme_io_md": 
false, 00:16:50.403 "write_zeroes": true, 00:16:50.403 "zcopy": false, 00:16:50.403 "get_zone_info": false, 00:16:50.403 "zone_management": false, 00:16:50.403 "zone_append": false, 00:16:50.403 "compare": false, 00:16:50.403 "compare_and_write": false, 00:16:50.403 "abort": false, 00:16:50.403 "seek_hole": false, 00:16:50.403 "seek_data": false, 00:16:50.403 "copy": false, 00:16:50.403 "nvme_iov_md": false 00:16:50.403 }, 00:16:50.403 "driver_specific": { 00:16:50.403 "raid": { 00:16:50.403 "uuid": "0fa72a59-6a1c-4792-a426-07554bab6ea6", 00:16:50.403 "strip_size_kb": 64, 00:16:50.403 "state": "online", 00:16:50.403 "raid_level": "raid5f", 00:16:50.403 "superblock": false, 00:16:50.403 "num_base_bdevs": 4, 00:16:50.403 "num_base_bdevs_discovered": 4, 00:16:50.403 "num_base_bdevs_operational": 4, 00:16:50.403 "base_bdevs_list": [ 00:16:50.403 { 00:16:50.403 "name": "NewBaseBdev", 00:16:50.403 "uuid": "eb8d4838-7a96-4cf2-9db7-75df58621ed4", 00:16:50.403 "is_configured": true, 00:16:50.403 "data_offset": 0, 00:16:50.403 "data_size": 65536 00:16:50.403 }, 00:16:50.403 { 00:16:50.403 "name": "BaseBdev2", 00:16:50.403 "uuid": "411c2974-8cf9-4b70-bff3-69977a6cdc36", 00:16:50.403 "is_configured": true, 00:16:50.403 "data_offset": 0, 00:16:50.403 "data_size": 65536 00:16:50.403 }, 00:16:50.403 { 00:16:50.403 "name": "BaseBdev3", 00:16:50.403 "uuid": "9d7b0dbe-0cab-437c-9e95-1e78ff6b04d5", 00:16:50.403 "is_configured": true, 00:16:50.403 "data_offset": 0, 00:16:50.403 "data_size": 65536 00:16:50.403 }, 00:16:50.403 { 00:16:50.403 "name": "BaseBdev4", 00:16:50.403 "uuid": "ba222d1c-5caa-4586-b4f3-a378834ac75a", 00:16:50.403 "is_configured": true, 00:16:50.403 "data_offset": 0, 00:16:50.403 "data_size": 65536 00:16:50.403 } 00:16:50.403 ] 00:16:50.403 } 00:16:50.403 } 00:16:50.403 }' 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:50.403 12:34:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:50.403 BaseBdev2 00:16:50.403 BaseBdev3 00:16:50.403 BaseBdev4' 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.403 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.663 12:34:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.663 [2024-09-30 12:34:02.459333] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:50.663 [2024-09-30 12:34:02.459362] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:50.663 [2024-09-30 12:34:02.459427] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.663 [2024-09-30 12:34:02.459710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:50.663 [2024-09-30 12:34:02.459721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82611 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82611 ']' 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82611 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82611 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82611' 00:16:50.663 killing process with pid 82611 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 82611 00:16:50.663 [2024-09-30 12:34:02.508632] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:50.663 12:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 82611 00:16:51.234 [2024-09-30 12:34:02.876662] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.172 12:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:52.172 00:16:52.172 real 0m11.464s 00:16:52.172 user 0m18.112s 00:16:52.172 sys 0m2.232s 00:16:52.173 12:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:52.173 ************************************ 00:16:52.173 END TEST raid5f_state_function_test 00:16:52.173 ************************************ 00:16:52.173 12:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.432 12:34:04 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:52.432 12:34:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:52.432 12:34:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:52.432 12:34:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.432 ************************************ 00:16:52.432 START TEST 
raid5f_state_function_test_sb 00:16:52.432 ************************************ 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:52.432 
12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83278 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83278' 00:16:52.432 Process raid pid: 83278 00:16:52.432 12:34:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83278 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83278 ']' 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.432 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.433 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.433 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.433 [2024-09-30 12:34:04.245379] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:52.433 [2024-09-30 12:34:04.245498] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.692 [2024-09-30 12:34:04.409883] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.951 [2024-09-30 12:34:04.601799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.951 [2024-09-30 12:34:04.798203] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.951 [2024-09-30 12:34:04.798240] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.210 [2024-09-30 12:34:05.058163] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:53.210 [2024-09-30 12:34:05.058216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:53.210 [2024-09-30 12:34:05.058226] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.210 [2024-09-30 12:34:05.058236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.210 [2024-09-30 12:34:05.058242] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:53.210 [2024-09-30 12:34:05.058251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:53.210 [2024-09-30 12:34:05.058257] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:53.210 [2024-09-30 12:34:05.058264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.210 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.469 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.469 "name": "Existed_Raid", 00:16:53.469 "uuid": "a29209d3-8539-49bf-9035-1b1cc522fd78", 00:16:53.469 "strip_size_kb": 64, 00:16:53.469 "state": "configuring", 00:16:53.469 "raid_level": "raid5f", 00:16:53.469 "superblock": true, 00:16:53.469 "num_base_bdevs": 4, 00:16:53.469 "num_base_bdevs_discovered": 0, 00:16:53.469 "num_base_bdevs_operational": 4, 00:16:53.469 "base_bdevs_list": [ 00:16:53.469 { 00:16:53.469 "name": "BaseBdev1", 00:16:53.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.469 "is_configured": false, 00:16:53.469 "data_offset": 0, 00:16:53.469 "data_size": 0 00:16:53.469 }, 00:16:53.469 { 00:16:53.469 "name": "BaseBdev2", 00:16:53.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.469 "is_configured": false, 00:16:53.469 "data_offset": 0, 00:16:53.469 "data_size": 0 00:16:53.469 }, 00:16:53.469 { 00:16:53.469 "name": "BaseBdev3", 00:16:53.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.469 "is_configured": false, 00:16:53.469 "data_offset": 0, 00:16:53.469 "data_size": 0 00:16:53.469 }, 00:16:53.469 { 00:16:53.469 "name": "BaseBdev4", 00:16:53.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.469 "is_configured": false, 00:16:53.469 "data_offset": 0, 00:16:53.469 "data_size": 0 00:16:53.469 } 00:16:53.469 ] 00:16:53.469 }' 00:16:53.469 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.469 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:53.728 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:53.728 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.728 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.728 [2024-09-30 12:34:05.493336] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:53.728 [2024-09-30 12:34:05.493430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:53.728 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.728 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:53.728 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.728 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.728 [2024-09-30 12:34:05.505340] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:53.728 [2024-09-30 12:34:05.505421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:53.728 [2024-09-30 12:34:05.505446] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.728 [2024-09-30 12:34:05.505466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.728 [2024-09-30 12:34:05.505482] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:53.728 [2024-09-30 12:34:05.505501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:53.728 [2024-09-30 12:34:05.505518] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:53.729 [2024-09-30 12:34:05.505536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:53.729 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.729 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:53.729 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.729 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.729 [2024-09-30 12:34:05.585778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.729 BaseBdev1 00:16:53.729 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.729 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:53.729 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:53.729 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:53.729 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:53.729 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:53.729 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:53.729 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:53.729 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.729 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:53.729 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.729 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:53.729 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.729 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.729 [ 00:16:53.729 { 00:16:53.729 "name": "BaseBdev1", 00:16:53.729 "aliases": [ 00:16:53.729 "4052fa39-e1d9-4e8a-bacf-a85855eb617f" 00:16:53.729 ], 00:16:53.729 "product_name": "Malloc disk", 00:16:53.729 "block_size": 512, 00:16:53.729 "num_blocks": 65536, 00:16:53.729 "uuid": "4052fa39-e1d9-4e8a-bacf-a85855eb617f", 00:16:53.729 "assigned_rate_limits": { 00:16:53.729 "rw_ios_per_sec": 0, 00:16:53.729 "rw_mbytes_per_sec": 0, 00:16:53.729 "r_mbytes_per_sec": 0, 00:16:53.729 "w_mbytes_per_sec": 0 00:16:53.729 }, 00:16:53.729 "claimed": true, 00:16:53.729 "claim_type": "exclusive_write", 00:16:53.729 "zoned": false, 00:16:53.729 "supported_io_types": { 00:16:53.729 "read": true, 00:16:53.729 "write": true, 00:16:53.729 "unmap": true, 00:16:53.729 "flush": true, 00:16:53.729 "reset": true, 00:16:53.729 "nvme_admin": false, 00:16:53.729 "nvme_io": false, 00:16:53.729 "nvme_io_md": false, 00:16:53.729 "write_zeroes": true, 00:16:53.729 "zcopy": true, 00:16:53.729 "get_zone_info": false, 00:16:53.729 "zone_management": false, 00:16:53.729 "zone_append": false, 00:16:53.729 "compare": false, 00:16:53.729 "compare_and_write": false, 00:16:53.729 "abort": true, 00:16:53.729 "seek_hole": false, 00:16:53.729 "seek_data": false, 00:16:53.729 "copy": true, 00:16:53.729 "nvme_iov_md": false 00:16:53.729 }, 00:16:53.729 "memory_domains": [ 00:16:53.729 { 00:16:53.729 "dma_device_id": "system", 00:16:53.729 "dma_device_type": 1 00:16:53.729 }, 00:16:53.729 { 00:16:53.729 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:53.729 "dma_device_type": 2 00:16:53.988 } 00:16:53.988 ], 00:16:53.988 "driver_specific": {} 00:16:53.988 } 00:16:53.988 ] 00:16:53.988 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.988 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:53.988 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:53.988 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.988 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.988 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.988 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.988 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.988 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.988 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.988 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.988 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.988 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.988 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.988 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.988 12:34:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.988 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.988 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.988 "name": "Existed_Raid", 00:16:53.988 "uuid": "3e9ef817-cabf-412f-9581-64404d883327", 00:16:53.988 "strip_size_kb": 64, 00:16:53.988 "state": "configuring", 00:16:53.988 "raid_level": "raid5f", 00:16:53.988 "superblock": true, 00:16:53.988 "num_base_bdevs": 4, 00:16:53.988 "num_base_bdevs_discovered": 1, 00:16:53.988 "num_base_bdevs_operational": 4, 00:16:53.988 "base_bdevs_list": [ 00:16:53.988 { 00:16:53.988 "name": "BaseBdev1", 00:16:53.988 "uuid": "4052fa39-e1d9-4e8a-bacf-a85855eb617f", 00:16:53.988 "is_configured": true, 00:16:53.988 "data_offset": 2048, 00:16:53.988 "data_size": 63488 00:16:53.988 }, 00:16:53.988 { 00:16:53.988 "name": "BaseBdev2", 00:16:53.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.988 "is_configured": false, 00:16:53.988 "data_offset": 0, 00:16:53.988 "data_size": 0 00:16:53.988 }, 00:16:53.988 { 00:16:53.988 "name": "BaseBdev3", 00:16:53.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.988 "is_configured": false, 00:16:53.988 "data_offset": 0, 00:16:53.988 "data_size": 0 00:16:53.988 }, 00:16:53.988 { 00:16:53.988 "name": "BaseBdev4", 00:16:53.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.988 "is_configured": false, 00:16:53.988 "data_offset": 0, 00:16:53.988 "data_size": 0 00:16:53.988 } 00:16:53.988 ] 00:16:53.988 }' 00:16:53.988 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.988 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.248 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:54.248 12:34:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.248 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.248 [2024-09-30 12:34:06.040980] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.248 [2024-09-30 12:34:06.041019] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:54.248 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.248 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:54.248 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.248 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.248 [2024-09-30 12:34:06.053014] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.248 [2024-09-30 12:34:06.054671] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.248 [2024-09-30 12:34:06.054717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.248 [2024-09-30 12:34:06.054726] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:54.248 [2024-09-30 12:34:06.054735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:54.248 [2024-09-30 12:34:06.054753] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:54.248 [2024-09-30 12:34:06.054762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:54.248 12:34:06 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.248 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:54.248 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:54.248 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:54.248 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.248 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.248 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.249 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.249 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.249 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.249 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.249 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.249 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.249 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.249 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.249 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.249 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.249 12:34:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.249 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.249 "name": "Existed_Raid", 00:16:54.249 "uuid": "17631556-1d4b-41c7-9abb-5c82f0c654be", 00:16:54.249 "strip_size_kb": 64, 00:16:54.249 "state": "configuring", 00:16:54.249 "raid_level": "raid5f", 00:16:54.249 "superblock": true, 00:16:54.249 "num_base_bdevs": 4, 00:16:54.249 "num_base_bdevs_discovered": 1, 00:16:54.249 "num_base_bdevs_operational": 4, 00:16:54.249 "base_bdevs_list": [ 00:16:54.249 { 00:16:54.249 "name": "BaseBdev1", 00:16:54.249 "uuid": "4052fa39-e1d9-4e8a-bacf-a85855eb617f", 00:16:54.249 "is_configured": true, 00:16:54.249 "data_offset": 2048, 00:16:54.249 "data_size": 63488 00:16:54.249 }, 00:16:54.249 { 00:16:54.249 "name": "BaseBdev2", 00:16:54.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.249 "is_configured": false, 00:16:54.249 "data_offset": 0, 00:16:54.249 "data_size": 0 00:16:54.249 }, 00:16:54.249 { 00:16:54.249 "name": "BaseBdev3", 00:16:54.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.249 "is_configured": false, 00:16:54.249 "data_offset": 0, 00:16:54.249 "data_size": 0 00:16:54.249 }, 00:16:54.249 { 00:16:54.249 "name": "BaseBdev4", 00:16:54.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.249 "is_configured": false, 00:16:54.249 "data_offset": 0, 00:16:54.249 "data_size": 0 00:16:54.249 } 00:16:54.249 ] 00:16:54.249 }' 00:16:54.249 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.249 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.819 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:54.819 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:54.819 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.819 [2024-09-30 12:34:06.547560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:54.819 BaseBdev2 00:16:54.819 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.819 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:54.819 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:54.819 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:54.819 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:54.819 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:54.819 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:54.819 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:54.819 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.819 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.819 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.820 [ 00:16:54.820 { 00:16:54.820 "name": "BaseBdev2", 00:16:54.820 "aliases": [ 00:16:54.820 
"63a3a3e3-2810-4733-9a45-ab57f57e6e58" 00:16:54.820 ], 00:16:54.820 "product_name": "Malloc disk", 00:16:54.820 "block_size": 512, 00:16:54.820 "num_blocks": 65536, 00:16:54.820 "uuid": "63a3a3e3-2810-4733-9a45-ab57f57e6e58", 00:16:54.820 "assigned_rate_limits": { 00:16:54.820 "rw_ios_per_sec": 0, 00:16:54.820 "rw_mbytes_per_sec": 0, 00:16:54.820 "r_mbytes_per_sec": 0, 00:16:54.820 "w_mbytes_per_sec": 0 00:16:54.820 }, 00:16:54.820 "claimed": true, 00:16:54.820 "claim_type": "exclusive_write", 00:16:54.820 "zoned": false, 00:16:54.820 "supported_io_types": { 00:16:54.820 "read": true, 00:16:54.820 "write": true, 00:16:54.820 "unmap": true, 00:16:54.820 "flush": true, 00:16:54.820 "reset": true, 00:16:54.820 "nvme_admin": false, 00:16:54.820 "nvme_io": false, 00:16:54.820 "nvme_io_md": false, 00:16:54.820 "write_zeroes": true, 00:16:54.820 "zcopy": true, 00:16:54.820 "get_zone_info": false, 00:16:54.820 "zone_management": false, 00:16:54.820 "zone_append": false, 00:16:54.820 "compare": false, 00:16:54.820 "compare_and_write": false, 00:16:54.820 "abort": true, 00:16:54.820 "seek_hole": false, 00:16:54.820 "seek_data": false, 00:16:54.820 "copy": true, 00:16:54.820 "nvme_iov_md": false 00:16:54.820 }, 00:16:54.820 "memory_domains": [ 00:16:54.820 { 00:16:54.820 "dma_device_id": "system", 00:16:54.820 "dma_device_type": 1 00:16:54.820 }, 00:16:54.820 { 00:16:54.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.820 "dma_device_type": 2 00:16:54.820 } 00:16:54.820 ], 00:16:54.820 "driver_specific": {} 00:16:54.820 } 00:16:54.820 ] 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.820 "name": "Existed_Raid", 00:16:54.820 "uuid": 
"17631556-1d4b-41c7-9abb-5c82f0c654be", 00:16:54.820 "strip_size_kb": 64, 00:16:54.820 "state": "configuring", 00:16:54.820 "raid_level": "raid5f", 00:16:54.820 "superblock": true, 00:16:54.820 "num_base_bdevs": 4, 00:16:54.820 "num_base_bdevs_discovered": 2, 00:16:54.820 "num_base_bdevs_operational": 4, 00:16:54.820 "base_bdevs_list": [ 00:16:54.820 { 00:16:54.820 "name": "BaseBdev1", 00:16:54.820 "uuid": "4052fa39-e1d9-4e8a-bacf-a85855eb617f", 00:16:54.820 "is_configured": true, 00:16:54.820 "data_offset": 2048, 00:16:54.820 "data_size": 63488 00:16:54.820 }, 00:16:54.820 { 00:16:54.820 "name": "BaseBdev2", 00:16:54.820 "uuid": "63a3a3e3-2810-4733-9a45-ab57f57e6e58", 00:16:54.820 "is_configured": true, 00:16:54.820 "data_offset": 2048, 00:16:54.820 "data_size": 63488 00:16:54.820 }, 00:16:54.820 { 00:16:54.820 "name": "BaseBdev3", 00:16:54.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.820 "is_configured": false, 00:16:54.820 "data_offset": 0, 00:16:54.820 "data_size": 0 00:16:54.820 }, 00:16:54.820 { 00:16:54.820 "name": "BaseBdev4", 00:16:54.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.820 "is_configured": false, 00:16:54.820 "data_offset": 0, 00:16:54.820 "data_size": 0 00:16:54.820 } 00:16:54.820 ] 00:16:54.820 }' 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.820 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.390 [2024-09-30 12:34:07.087630] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:55.390 BaseBdev3 
00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.390 [ 00:16:55.390 { 00:16:55.390 "name": "BaseBdev3", 00:16:55.390 "aliases": [ 00:16:55.390 "fb6ecedd-1ab1-41d8-9249-d764a700ae15" 00:16:55.390 ], 00:16:55.390 "product_name": "Malloc disk", 00:16:55.390 "block_size": 512, 00:16:55.390 "num_blocks": 65536, 00:16:55.390 "uuid": "fb6ecedd-1ab1-41d8-9249-d764a700ae15", 00:16:55.390 
"assigned_rate_limits": { 00:16:55.390 "rw_ios_per_sec": 0, 00:16:55.390 "rw_mbytes_per_sec": 0, 00:16:55.390 "r_mbytes_per_sec": 0, 00:16:55.390 "w_mbytes_per_sec": 0 00:16:55.390 }, 00:16:55.390 "claimed": true, 00:16:55.390 "claim_type": "exclusive_write", 00:16:55.390 "zoned": false, 00:16:55.390 "supported_io_types": { 00:16:55.390 "read": true, 00:16:55.390 "write": true, 00:16:55.390 "unmap": true, 00:16:55.390 "flush": true, 00:16:55.390 "reset": true, 00:16:55.390 "nvme_admin": false, 00:16:55.390 "nvme_io": false, 00:16:55.390 "nvme_io_md": false, 00:16:55.390 "write_zeroes": true, 00:16:55.390 "zcopy": true, 00:16:55.390 "get_zone_info": false, 00:16:55.390 "zone_management": false, 00:16:55.390 "zone_append": false, 00:16:55.390 "compare": false, 00:16:55.390 "compare_and_write": false, 00:16:55.390 "abort": true, 00:16:55.390 "seek_hole": false, 00:16:55.390 "seek_data": false, 00:16:55.390 "copy": true, 00:16:55.390 "nvme_iov_md": false 00:16:55.390 }, 00:16:55.390 "memory_domains": [ 00:16:55.390 { 00:16:55.390 "dma_device_id": "system", 00:16:55.390 "dma_device_type": 1 00:16:55.390 }, 00:16:55.390 { 00:16:55.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.390 "dma_device_type": 2 00:16:55.390 } 00:16:55.390 ], 00:16:55.390 "driver_specific": {} 00:16:55.390 } 00:16:55.390 ] 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.390 "name": "Existed_Raid", 00:16:55.390 "uuid": "17631556-1d4b-41c7-9abb-5c82f0c654be", 00:16:55.390 "strip_size_kb": 64, 00:16:55.390 "state": "configuring", 00:16:55.390 "raid_level": "raid5f", 00:16:55.390 "superblock": true, 00:16:55.390 "num_base_bdevs": 4, 00:16:55.390 "num_base_bdevs_discovered": 3, 
00:16:55.390 "num_base_bdevs_operational": 4, 00:16:55.390 "base_bdevs_list": [ 00:16:55.390 { 00:16:55.390 "name": "BaseBdev1", 00:16:55.390 "uuid": "4052fa39-e1d9-4e8a-bacf-a85855eb617f", 00:16:55.390 "is_configured": true, 00:16:55.390 "data_offset": 2048, 00:16:55.390 "data_size": 63488 00:16:55.390 }, 00:16:55.390 { 00:16:55.390 "name": "BaseBdev2", 00:16:55.390 "uuid": "63a3a3e3-2810-4733-9a45-ab57f57e6e58", 00:16:55.390 "is_configured": true, 00:16:55.390 "data_offset": 2048, 00:16:55.390 "data_size": 63488 00:16:55.390 }, 00:16:55.390 { 00:16:55.390 "name": "BaseBdev3", 00:16:55.390 "uuid": "fb6ecedd-1ab1-41d8-9249-d764a700ae15", 00:16:55.390 "is_configured": true, 00:16:55.390 "data_offset": 2048, 00:16:55.390 "data_size": 63488 00:16:55.390 }, 00:16:55.390 { 00:16:55.390 "name": "BaseBdev4", 00:16:55.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.390 "is_configured": false, 00:16:55.390 "data_offset": 0, 00:16:55.390 "data_size": 0 00:16:55.390 } 00:16:55.390 ] 00:16:55.390 }' 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.390 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.961 [2024-09-30 12:34:07.611830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:55.961 [2024-09-30 12:34:07.612083] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:55.961 [2024-09-30 12:34:07.612101] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:55.961 [2024-09-30 
12:34:07.612342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:55.961 BaseBdev4 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.961 [2024-09-30 12:34:07.619784] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:55.961 [2024-09-30 12:34:07.619810] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:55.961 [2024-09-30 12:34:07.620043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:55.961 12:34:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.961 [ 00:16:55.961 { 00:16:55.961 "name": "BaseBdev4", 00:16:55.961 "aliases": [ 00:16:55.961 "66625d9a-17e1-49ad-94ba-c9140265cf2b" 00:16:55.961 ], 00:16:55.961 "product_name": "Malloc disk", 00:16:55.961 "block_size": 512, 00:16:55.961 "num_blocks": 65536, 00:16:55.961 "uuid": "66625d9a-17e1-49ad-94ba-c9140265cf2b", 00:16:55.961 "assigned_rate_limits": { 00:16:55.961 "rw_ios_per_sec": 0, 00:16:55.961 "rw_mbytes_per_sec": 0, 00:16:55.961 "r_mbytes_per_sec": 0, 00:16:55.961 "w_mbytes_per_sec": 0 00:16:55.961 }, 00:16:55.961 "claimed": true, 00:16:55.961 "claim_type": "exclusive_write", 00:16:55.961 "zoned": false, 00:16:55.961 "supported_io_types": { 00:16:55.961 "read": true, 00:16:55.961 "write": true, 00:16:55.961 "unmap": true, 00:16:55.961 "flush": true, 00:16:55.961 "reset": true, 00:16:55.961 "nvme_admin": false, 00:16:55.961 "nvme_io": false, 00:16:55.961 "nvme_io_md": false, 00:16:55.961 "write_zeroes": true, 00:16:55.961 "zcopy": true, 00:16:55.961 "get_zone_info": false, 00:16:55.961 "zone_management": false, 00:16:55.961 "zone_append": false, 00:16:55.961 "compare": false, 00:16:55.961 "compare_and_write": false, 00:16:55.961 "abort": true, 00:16:55.961 "seek_hole": false, 00:16:55.961 "seek_data": false, 00:16:55.961 "copy": true, 00:16:55.961 "nvme_iov_md": false 00:16:55.961 }, 00:16:55.961 "memory_domains": [ 00:16:55.961 { 00:16:55.961 "dma_device_id": "system", 00:16:55.961 "dma_device_type": 1 00:16:55.961 }, 00:16:55.961 { 00:16:55.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.961 "dma_device_type": 2 00:16:55.961 } 00:16:55.961 ], 00:16:55.961 "driver_specific": {} 00:16:55.961 } 00:16:55.961 ] 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.961 12:34:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.961 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.961 "name": "Existed_Raid", 00:16:55.961 "uuid": "17631556-1d4b-41c7-9abb-5c82f0c654be", 00:16:55.961 "strip_size_kb": 64, 00:16:55.961 "state": "online", 00:16:55.961 "raid_level": "raid5f", 00:16:55.961 "superblock": true, 00:16:55.961 "num_base_bdevs": 4, 00:16:55.961 "num_base_bdevs_discovered": 4, 00:16:55.961 "num_base_bdevs_operational": 4, 00:16:55.961 "base_bdevs_list": [ 00:16:55.961 { 00:16:55.961 "name": "BaseBdev1", 00:16:55.961 "uuid": "4052fa39-e1d9-4e8a-bacf-a85855eb617f", 00:16:55.961 "is_configured": true, 00:16:55.961 "data_offset": 2048, 00:16:55.961 "data_size": 63488 00:16:55.961 }, 00:16:55.961 { 00:16:55.961 "name": "BaseBdev2", 00:16:55.961 "uuid": "63a3a3e3-2810-4733-9a45-ab57f57e6e58", 00:16:55.961 "is_configured": true, 00:16:55.961 "data_offset": 2048, 00:16:55.961 "data_size": 63488 00:16:55.961 }, 00:16:55.961 { 00:16:55.961 "name": "BaseBdev3", 00:16:55.961 "uuid": "fb6ecedd-1ab1-41d8-9249-d764a700ae15", 00:16:55.961 "is_configured": true, 00:16:55.961 "data_offset": 2048, 00:16:55.961 "data_size": 63488 00:16:55.961 }, 00:16:55.961 { 00:16:55.961 "name": "BaseBdev4", 00:16:55.961 "uuid": "66625d9a-17e1-49ad-94ba-c9140265cf2b", 00:16:55.961 "is_configured": true, 00:16:55.961 "data_offset": 2048, 00:16:55.961 "data_size": 63488 00:16:55.961 } 00:16:55.961 ] 00:16:55.962 }' 00:16:55.962 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.962 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.222 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:56.222 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:56.222 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:56.222 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:56.222 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:56.222 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:56.222 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:56.222 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.222 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.222 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:56.222 [2024-09-30 12:34:08.111701] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.481 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.481 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:56.481 "name": "Existed_Raid", 00:16:56.481 "aliases": [ 00:16:56.481 "17631556-1d4b-41c7-9abb-5c82f0c654be" 00:16:56.481 ], 00:16:56.481 "product_name": "Raid Volume", 00:16:56.481 "block_size": 512, 00:16:56.481 "num_blocks": 190464, 00:16:56.481 "uuid": "17631556-1d4b-41c7-9abb-5c82f0c654be", 00:16:56.481 "assigned_rate_limits": { 00:16:56.481 "rw_ios_per_sec": 0, 00:16:56.481 "rw_mbytes_per_sec": 0, 00:16:56.481 "r_mbytes_per_sec": 0, 00:16:56.481 "w_mbytes_per_sec": 0 00:16:56.482 }, 00:16:56.482 "claimed": false, 00:16:56.482 "zoned": false, 00:16:56.482 "supported_io_types": { 00:16:56.482 "read": true, 00:16:56.482 "write": true, 00:16:56.482 "unmap": false, 00:16:56.482 "flush": false, 
00:16:56.482 "reset": true, 00:16:56.482 "nvme_admin": false, 00:16:56.482 "nvme_io": false, 00:16:56.482 "nvme_io_md": false, 00:16:56.482 "write_zeroes": true, 00:16:56.482 "zcopy": false, 00:16:56.482 "get_zone_info": false, 00:16:56.482 "zone_management": false, 00:16:56.482 "zone_append": false, 00:16:56.482 "compare": false, 00:16:56.482 "compare_and_write": false, 00:16:56.482 "abort": false, 00:16:56.482 "seek_hole": false, 00:16:56.482 "seek_data": false, 00:16:56.482 "copy": false, 00:16:56.482 "nvme_iov_md": false 00:16:56.482 }, 00:16:56.482 "driver_specific": { 00:16:56.482 "raid": { 00:16:56.482 "uuid": "17631556-1d4b-41c7-9abb-5c82f0c654be", 00:16:56.482 "strip_size_kb": 64, 00:16:56.482 "state": "online", 00:16:56.482 "raid_level": "raid5f", 00:16:56.482 "superblock": true, 00:16:56.482 "num_base_bdevs": 4, 00:16:56.482 "num_base_bdevs_discovered": 4, 00:16:56.482 "num_base_bdevs_operational": 4, 00:16:56.482 "base_bdevs_list": [ 00:16:56.482 { 00:16:56.482 "name": "BaseBdev1", 00:16:56.482 "uuid": "4052fa39-e1d9-4e8a-bacf-a85855eb617f", 00:16:56.482 "is_configured": true, 00:16:56.482 "data_offset": 2048, 00:16:56.482 "data_size": 63488 00:16:56.482 }, 00:16:56.482 { 00:16:56.482 "name": "BaseBdev2", 00:16:56.482 "uuid": "63a3a3e3-2810-4733-9a45-ab57f57e6e58", 00:16:56.482 "is_configured": true, 00:16:56.482 "data_offset": 2048, 00:16:56.482 "data_size": 63488 00:16:56.482 }, 00:16:56.482 { 00:16:56.482 "name": "BaseBdev3", 00:16:56.482 "uuid": "fb6ecedd-1ab1-41d8-9249-d764a700ae15", 00:16:56.482 "is_configured": true, 00:16:56.482 "data_offset": 2048, 00:16:56.482 "data_size": 63488 00:16:56.482 }, 00:16:56.482 { 00:16:56.482 "name": "BaseBdev4", 00:16:56.482 "uuid": "66625d9a-17e1-49ad-94ba-c9140265cf2b", 00:16:56.482 "is_configured": true, 00:16:56.482 "data_offset": 2048, 00:16:56.482 "data_size": 63488 00:16:56.482 } 00:16:56.482 ] 00:16:56.482 } 00:16:56.482 } 00:16:56.482 }' 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:56.482 BaseBdev2 00:16:56.482 BaseBdev3 00:16:56.482 BaseBdev4' 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.482 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.742 [2024-09-30 12:34:08.431170] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.742 "name": "Existed_Raid", 00:16:56.742 "uuid": "17631556-1d4b-41c7-9abb-5c82f0c654be", 00:16:56.742 "strip_size_kb": 64, 00:16:56.742 "state": "online", 00:16:56.742 "raid_level": "raid5f", 00:16:56.742 "superblock": true, 00:16:56.742 "num_base_bdevs": 4, 00:16:56.742 "num_base_bdevs_discovered": 3, 00:16:56.742 "num_base_bdevs_operational": 3, 00:16:56.742 "base_bdevs_list": [ 00:16:56.742 { 00:16:56.742 "name": null, 00:16:56.742 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:56.742 "is_configured": false, 00:16:56.742 "data_offset": 0, 00:16:56.742 "data_size": 63488 00:16:56.742 }, 00:16:56.742 { 00:16:56.742 "name": "BaseBdev2", 00:16:56.742 "uuid": "63a3a3e3-2810-4733-9a45-ab57f57e6e58", 00:16:56.742 "is_configured": true, 00:16:56.742 "data_offset": 2048, 00:16:56.742 "data_size": 63488 00:16:56.742 }, 00:16:56.742 { 00:16:56.742 "name": "BaseBdev3", 00:16:56.742 "uuid": "fb6ecedd-1ab1-41d8-9249-d764a700ae15", 00:16:56.742 "is_configured": true, 00:16:56.742 "data_offset": 2048, 00:16:56.742 "data_size": 63488 00:16:56.742 }, 00:16:56.742 { 00:16:56.742 "name": "BaseBdev4", 00:16:56.742 "uuid": "66625d9a-17e1-49ad-94ba-c9140265cf2b", 00:16:56.742 "is_configured": true, 00:16:56.742 "data_offset": 2048, 00:16:56.742 "data_size": 63488 00:16:56.742 } 00:16:56.742 ] 00:16:56.742 }' 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.742 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.310 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:57.310 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:57.310 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.310 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.310 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:57.310 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.310 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.311 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:16:57.311 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:57.311 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:57.311 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.311 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.311 [2024-09-30 12:34:09.001325] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:57.311 [2024-09-30 12:34:09.001480] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.311 [2024-09-30 12:34:09.090585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.311 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.311 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:57.311 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:57.311 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:57.311 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.311 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.311 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.311 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.311 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:57.311 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:57.311 
12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:57.311 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.311 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.311 [2024-09-30 12:34:09.130517] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.569 [2024-09-30 12:34:09.257423] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:57.569 [2024-09-30 12:34:09.257475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:57.569 BaseBdev2 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.569 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.569 [ 00:16:57.569 { 00:16:57.569 "name": "BaseBdev2", 00:16:57.569 "aliases": [ 00:16:57.569 "a35b7fd0-0184-4075-8649-65b16edf0fcf" 00:16:57.569 ], 00:16:57.569 "product_name": "Malloc disk", 00:16:57.569 "block_size": 512, 00:16:57.569 "num_blocks": 65536, 00:16:57.569 "uuid": 
"a35b7fd0-0184-4075-8649-65b16edf0fcf", 00:16:57.569 "assigned_rate_limits": { 00:16:57.569 "rw_ios_per_sec": 0, 00:16:57.829 "rw_mbytes_per_sec": 0, 00:16:57.829 "r_mbytes_per_sec": 0, 00:16:57.829 "w_mbytes_per_sec": 0 00:16:57.829 }, 00:16:57.829 "claimed": false, 00:16:57.829 "zoned": false, 00:16:57.829 "supported_io_types": { 00:16:57.829 "read": true, 00:16:57.829 "write": true, 00:16:57.829 "unmap": true, 00:16:57.829 "flush": true, 00:16:57.829 "reset": true, 00:16:57.829 "nvme_admin": false, 00:16:57.829 "nvme_io": false, 00:16:57.829 "nvme_io_md": false, 00:16:57.829 "write_zeroes": true, 00:16:57.829 "zcopy": true, 00:16:57.829 "get_zone_info": false, 00:16:57.829 "zone_management": false, 00:16:57.829 "zone_append": false, 00:16:57.829 "compare": false, 00:16:57.829 "compare_and_write": false, 00:16:57.829 "abort": true, 00:16:57.829 "seek_hole": false, 00:16:57.829 "seek_data": false, 00:16:57.829 "copy": true, 00:16:57.829 "nvme_iov_md": false 00:16:57.829 }, 00:16:57.829 "memory_domains": [ 00:16:57.829 { 00:16:57.829 "dma_device_id": "system", 00:16:57.829 "dma_device_type": 1 00:16:57.829 }, 00:16:57.829 { 00:16:57.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.829 "dma_device_type": 2 00:16:57.829 } 00:16:57.829 ], 00:16:57.829 "driver_specific": {} 00:16:57.829 } 00:16:57.829 ] 00:16:57.829 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.829 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:57.829 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:57.829 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.830 BaseBdev3 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.830 [ 00:16:57.830 { 00:16:57.830 "name": "BaseBdev3", 00:16:57.830 "aliases": [ 00:16:57.830 "b46be4c7-9db0-4881-920a-ea02ca50dd50" 00:16:57.830 ], 00:16:57.830 
"product_name": "Malloc disk", 00:16:57.830 "block_size": 512, 00:16:57.830 "num_blocks": 65536, 00:16:57.830 "uuid": "b46be4c7-9db0-4881-920a-ea02ca50dd50", 00:16:57.830 "assigned_rate_limits": { 00:16:57.830 "rw_ios_per_sec": 0, 00:16:57.830 "rw_mbytes_per_sec": 0, 00:16:57.830 "r_mbytes_per_sec": 0, 00:16:57.830 "w_mbytes_per_sec": 0 00:16:57.830 }, 00:16:57.830 "claimed": false, 00:16:57.830 "zoned": false, 00:16:57.830 "supported_io_types": { 00:16:57.830 "read": true, 00:16:57.830 "write": true, 00:16:57.830 "unmap": true, 00:16:57.830 "flush": true, 00:16:57.830 "reset": true, 00:16:57.830 "nvme_admin": false, 00:16:57.830 "nvme_io": false, 00:16:57.830 "nvme_io_md": false, 00:16:57.830 "write_zeroes": true, 00:16:57.830 "zcopy": true, 00:16:57.830 "get_zone_info": false, 00:16:57.830 "zone_management": false, 00:16:57.830 "zone_append": false, 00:16:57.830 "compare": false, 00:16:57.830 "compare_and_write": false, 00:16:57.830 "abort": true, 00:16:57.830 "seek_hole": false, 00:16:57.830 "seek_data": false, 00:16:57.830 "copy": true, 00:16:57.830 "nvme_iov_md": false 00:16:57.830 }, 00:16:57.830 "memory_domains": [ 00:16:57.830 { 00:16:57.830 "dma_device_id": "system", 00:16:57.830 "dma_device_type": 1 00:16:57.830 }, 00:16:57.830 { 00:16:57.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.830 "dma_device_type": 2 00:16:57.830 } 00:16:57.830 ], 00:16:57.830 "driver_specific": {} 00:16:57.830 } 00:16:57.830 ] 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.830 BaseBdev4 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.830 [ 00:16:57.830 { 00:16:57.830 "name": "BaseBdev4", 00:16:57.830 
"aliases": [ 00:16:57.830 "2d89d4cb-155d-4410-ba23-814f7777bfbc" 00:16:57.830 ], 00:16:57.830 "product_name": "Malloc disk", 00:16:57.830 "block_size": 512, 00:16:57.830 "num_blocks": 65536, 00:16:57.830 "uuid": "2d89d4cb-155d-4410-ba23-814f7777bfbc", 00:16:57.830 "assigned_rate_limits": { 00:16:57.830 "rw_ios_per_sec": 0, 00:16:57.830 "rw_mbytes_per_sec": 0, 00:16:57.830 "r_mbytes_per_sec": 0, 00:16:57.830 "w_mbytes_per_sec": 0 00:16:57.830 }, 00:16:57.830 "claimed": false, 00:16:57.830 "zoned": false, 00:16:57.830 "supported_io_types": { 00:16:57.830 "read": true, 00:16:57.830 "write": true, 00:16:57.830 "unmap": true, 00:16:57.830 "flush": true, 00:16:57.830 "reset": true, 00:16:57.830 "nvme_admin": false, 00:16:57.830 "nvme_io": false, 00:16:57.830 "nvme_io_md": false, 00:16:57.830 "write_zeroes": true, 00:16:57.830 "zcopy": true, 00:16:57.830 "get_zone_info": false, 00:16:57.830 "zone_management": false, 00:16:57.830 "zone_append": false, 00:16:57.830 "compare": false, 00:16:57.830 "compare_and_write": false, 00:16:57.830 "abort": true, 00:16:57.830 "seek_hole": false, 00:16:57.830 "seek_data": false, 00:16:57.830 "copy": true, 00:16:57.830 "nvme_iov_md": false 00:16:57.830 }, 00:16:57.830 "memory_domains": [ 00:16:57.830 { 00:16:57.830 "dma_device_id": "system", 00:16:57.830 "dma_device_type": 1 00:16:57.830 }, 00:16:57.830 { 00:16:57.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.830 "dma_device_type": 2 00:16:57.830 } 00:16:57.830 ], 00:16:57.830 "driver_specific": {} 00:16:57.830 } 00:16:57.830 ] 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:57.830 
12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.830 [2024-09-30 12:34:09.634885] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:57.830 [2024-09-30 12:34:09.634939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:57.830 [2024-09-30 12:34:09.634960] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.830 [2024-09-30 12:34:09.636575] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:57.830 [2024-09-30 12:34:09.636629] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.830 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.830 "name": "Existed_Raid", 00:16:57.830 "uuid": "9d75aa84-d531-4d19-a00d-f82a8fa5b25c", 00:16:57.830 "strip_size_kb": 64, 00:16:57.830 "state": "configuring", 00:16:57.830 "raid_level": "raid5f", 00:16:57.830 "superblock": true, 00:16:57.830 "num_base_bdevs": 4, 00:16:57.830 "num_base_bdevs_discovered": 3, 00:16:57.831 "num_base_bdevs_operational": 4, 00:16:57.831 "base_bdevs_list": [ 00:16:57.831 { 00:16:57.831 "name": "BaseBdev1", 00:16:57.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.831 "is_configured": false, 00:16:57.831 "data_offset": 0, 00:16:57.831 "data_size": 0 00:16:57.831 }, 00:16:57.831 { 00:16:57.831 "name": "BaseBdev2", 00:16:57.831 "uuid": "a35b7fd0-0184-4075-8649-65b16edf0fcf", 00:16:57.831 "is_configured": true, 00:16:57.831 "data_offset": 2048, 00:16:57.831 "data_size": 63488 00:16:57.831 }, 00:16:57.831 { 00:16:57.831 "name": "BaseBdev3", 
00:16:57.831 "uuid": "b46be4c7-9db0-4881-920a-ea02ca50dd50", 00:16:57.831 "is_configured": true, 00:16:57.831 "data_offset": 2048, 00:16:57.831 "data_size": 63488 00:16:57.831 }, 00:16:57.831 { 00:16:57.831 "name": "BaseBdev4", 00:16:57.831 "uuid": "2d89d4cb-155d-4410-ba23-814f7777bfbc", 00:16:57.831 "is_configured": true, 00:16:57.831 "data_offset": 2048, 00:16:57.831 "data_size": 63488 00:16:57.831 } 00:16:57.831 ] 00:16:57.831 }' 00:16:57.831 12:34:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.831 12:34:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.400 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:58.400 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.400 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.400 [2024-09-30 12:34:10.118125] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:58.400 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.400 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:58.400 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.401 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.401 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.401 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.401 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.401 
12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.401 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.401 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.401 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.401 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.401 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.401 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.401 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.401 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.401 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.401 "name": "Existed_Raid", 00:16:58.401 "uuid": "9d75aa84-d531-4d19-a00d-f82a8fa5b25c", 00:16:58.401 "strip_size_kb": 64, 00:16:58.401 "state": "configuring", 00:16:58.401 "raid_level": "raid5f", 00:16:58.401 "superblock": true, 00:16:58.401 "num_base_bdevs": 4, 00:16:58.401 "num_base_bdevs_discovered": 2, 00:16:58.401 "num_base_bdevs_operational": 4, 00:16:58.401 "base_bdevs_list": [ 00:16:58.401 { 00:16:58.401 "name": "BaseBdev1", 00:16:58.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.401 "is_configured": false, 00:16:58.401 "data_offset": 0, 00:16:58.401 "data_size": 0 00:16:58.401 }, 00:16:58.401 { 00:16:58.401 "name": null, 00:16:58.401 "uuid": "a35b7fd0-0184-4075-8649-65b16edf0fcf", 00:16:58.401 "is_configured": false, 00:16:58.401 "data_offset": 0, 00:16:58.401 "data_size": 63488 00:16:58.401 }, 00:16:58.401 { 
00:16:58.401 "name": "BaseBdev3", 00:16:58.401 "uuid": "b46be4c7-9db0-4881-920a-ea02ca50dd50", 00:16:58.401 "is_configured": true, 00:16:58.401 "data_offset": 2048, 00:16:58.401 "data_size": 63488 00:16:58.401 }, 00:16:58.401 { 00:16:58.401 "name": "BaseBdev4", 00:16:58.401 "uuid": "2d89d4cb-155d-4410-ba23-814f7777bfbc", 00:16:58.401 "is_configured": true, 00:16:58.401 "data_offset": 2048, 00:16:58.401 "data_size": 63488 00:16:58.401 } 00:16:58.401 ] 00:16:58.401 }' 00:16:58.401 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.401 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.971 [2024-09-30 12:34:10.652186] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.971 BaseBdev1 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.971 [ 00:16:58.971 { 00:16:58.971 "name": "BaseBdev1", 00:16:58.971 "aliases": [ 00:16:58.971 "00869134-e5d8-4415-9c06-54f44f57f03e" 00:16:58.971 ], 00:16:58.971 "product_name": "Malloc disk", 00:16:58.971 "block_size": 512, 00:16:58.971 "num_blocks": 65536, 00:16:58.971 "uuid": "00869134-e5d8-4415-9c06-54f44f57f03e", 00:16:58.971 "assigned_rate_limits": { 00:16:58.971 "rw_ios_per_sec": 0, 00:16:58.971 "rw_mbytes_per_sec": 0, 00:16:58.971 
"r_mbytes_per_sec": 0, 00:16:58.971 "w_mbytes_per_sec": 0 00:16:58.971 }, 00:16:58.971 "claimed": true, 00:16:58.971 "claim_type": "exclusive_write", 00:16:58.971 "zoned": false, 00:16:58.971 "supported_io_types": { 00:16:58.971 "read": true, 00:16:58.971 "write": true, 00:16:58.971 "unmap": true, 00:16:58.971 "flush": true, 00:16:58.971 "reset": true, 00:16:58.971 "nvme_admin": false, 00:16:58.971 "nvme_io": false, 00:16:58.971 "nvme_io_md": false, 00:16:58.971 "write_zeroes": true, 00:16:58.971 "zcopy": true, 00:16:58.971 "get_zone_info": false, 00:16:58.971 "zone_management": false, 00:16:58.971 "zone_append": false, 00:16:58.971 "compare": false, 00:16:58.971 "compare_and_write": false, 00:16:58.971 "abort": true, 00:16:58.971 "seek_hole": false, 00:16:58.971 "seek_data": false, 00:16:58.971 "copy": true, 00:16:58.971 "nvme_iov_md": false 00:16:58.971 }, 00:16:58.971 "memory_domains": [ 00:16:58.971 { 00:16:58.971 "dma_device_id": "system", 00:16:58.971 "dma_device_type": 1 00:16:58.971 }, 00:16:58.971 { 00:16:58.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.971 "dma_device_type": 2 00:16:58.971 } 00:16:58.971 ], 00:16:58.971 "driver_specific": {} 00:16:58.971 } 00:16:58.971 ] 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.971 12:34:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.971 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.971 "name": "Existed_Raid", 00:16:58.971 "uuid": "9d75aa84-d531-4d19-a00d-f82a8fa5b25c", 00:16:58.971 "strip_size_kb": 64, 00:16:58.971 "state": "configuring", 00:16:58.971 "raid_level": "raid5f", 00:16:58.971 "superblock": true, 00:16:58.971 "num_base_bdevs": 4, 00:16:58.972 "num_base_bdevs_discovered": 3, 00:16:58.972 "num_base_bdevs_operational": 4, 00:16:58.972 "base_bdevs_list": [ 00:16:58.972 { 00:16:58.972 "name": "BaseBdev1", 00:16:58.972 "uuid": "00869134-e5d8-4415-9c06-54f44f57f03e", 00:16:58.972 "is_configured": true, 00:16:58.972 "data_offset": 2048, 00:16:58.972 "data_size": 63488 00:16:58.972 
}, 00:16:58.972 { 00:16:58.972 "name": null, 00:16:58.972 "uuid": "a35b7fd0-0184-4075-8649-65b16edf0fcf", 00:16:58.972 "is_configured": false, 00:16:58.972 "data_offset": 0, 00:16:58.972 "data_size": 63488 00:16:58.972 }, 00:16:58.972 { 00:16:58.972 "name": "BaseBdev3", 00:16:58.972 "uuid": "b46be4c7-9db0-4881-920a-ea02ca50dd50", 00:16:58.972 "is_configured": true, 00:16:58.972 "data_offset": 2048, 00:16:58.972 "data_size": 63488 00:16:58.972 }, 00:16:58.972 { 00:16:58.972 "name": "BaseBdev4", 00:16:58.972 "uuid": "2d89d4cb-155d-4410-ba23-814f7777bfbc", 00:16:58.972 "is_configured": true, 00:16:58.972 "data_offset": 2048, 00:16:58.972 "data_size": 63488 00:16:58.972 } 00:16:58.972 ] 00:16:58.972 }' 00:16:58.972 12:34:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.972 12:34:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.541 
[2024-09-30 12:34:11.199538] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.541 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.542 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.542 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:59.542 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.542 "name": "Existed_Raid", 00:16:59.542 "uuid": "9d75aa84-d531-4d19-a00d-f82a8fa5b25c", 00:16:59.542 "strip_size_kb": 64, 00:16:59.542 "state": "configuring", 00:16:59.542 "raid_level": "raid5f", 00:16:59.542 "superblock": true, 00:16:59.542 "num_base_bdevs": 4, 00:16:59.542 "num_base_bdevs_discovered": 2, 00:16:59.542 "num_base_bdevs_operational": 4, 00:16:59.542 "base_bdevs_list": [ 00:16:59.542 { 00:16:59.542 "name": "BaseBdev1", 00:16:59.542 "uuid": "00869134-e5d8-4415-9c06-54f44f57f03e", 00:16:59.542 "is_configured": true, 00:16:59.542 "data_offset": 2048, 00:16:59.542 "data_size": 63488 00:16:59.542 }, 00:16:59.542 { 00:16:59.542 "name": null, 00:16:59.542 "uuid": "a35b7fd0-0184-4075-8649-65b16edf0fcf", 00:16:59.542 "is_configured": false, 00:16:59.542 "data_offset": 0, 00:16:59.542 "data_size": 63488 00:16:59.542 }, 00:16:59.542 { 00:16:59.542 "name": null, 00:16:59.542 "uuid": "b46be4c7-9db0-4881-920a-ea02ca50dd50", 00:16:59.542 "is_configured": false, 00:16:59.542 "data_offset": 0, 00:16:59.542 "data_size": 63488 00:16:59.542 }, 00:16:59.542 { 00:16:59.542 "name": "BaseBdev4", 00:16:59.542 "uuid": "2d89d4cb-155d-4410-ba23-814f7777bfbc", 00:16:59.542 "is_configured": true, 00:16:59.542 "data_offset": 2048, 00:16:59.542 "data_size": 63488 00:16:59.542 } 00:16:59.542 ] 00:16:59.542 }' 00:16:59.542 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.542 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.801 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.801 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.801 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:16:59.801 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.801 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.061 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:00.061 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:00.061 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.061 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.062 [2024-09-30 12:34:11.706661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:00.062 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.062 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:00.062 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.062 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.062 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.062 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.062 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.062 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.062 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.062 12:34:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.062 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.062 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.062 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.062 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.062 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.062 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.062 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.062 "name": "Existed_Raid", 00:17:00.062 "uuid": "9d75aa84-d531-4d19-a00d-f82a8fa5b25c", 00:17:00.062 "strip_size_kb": 64, 00:17:00.062 "state": "configuring", 00:17:00.062 "raid_level": "raid5f", 00:17:00.062 "superblock": true, 00:17:00.062 "num_base_bdevs": 4, 00:17:00.062 "num_base_bdevs_discovered": 3, 00:17:00.062 "num_base_bdevs_operational": 4, 00:17:00.062 "base_bdevs_list": [ 00:17:00.062 { 00:17:00.062 "name": "BaseBdev1", 00:17:00.062 "uuid": "00869134-e5d8-4415-9c06-54f44f57f03e", 00:17:00.062 "is_configured": true, 00:17:00.062 "data_offset": 2048, 00:17:00.062 "data_size": 63488 00:17:00.062 }, 00:17:00.062 { 00:17:00.062 "name": null, 00:17:00.062 "uuid": "a35b7fd0-0184-4075-8649-65b16edf0fcf", 00:17:00.062 "is_configured": false, 00:17:00.062 "data_offset": 0, 00:17:00.062 "data_size": 63488 00:17:00.062 }, 00:17:00.062 { 00:17:00.062 "name": "BaseBdev3", 00:17:00.062 "uuid": "b46be4c7-9db0-4881-920a-ea02ca50dd50", 00:17:00.062 "is_configured": true, 00:17:00.062 "data_offset": 2048, 00:17:00.062 "data_size": 63488 00:17:00.062 }, 00:17:00.062 { 
00:17:00.062 "name": "BaseBdev4", 00:17:00.062 "uuid": "2d89d4cb-155d-4410-ba23-814f7777bfbc", 00:17:00.062 "is_configured": true, 00:17:00.062 "data_offset": 2048, 00:17:00.062 "data_size": 63488 00:17:00.062 } 00:17:00.062 ] 00:17:00.062 }' 00:17:00.062 12:34:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.062 12:34:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.322 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.322 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:00.322 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.322 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.322 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.322 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:00.322 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:00.322 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.322 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.322 [2024-09-30 12:34:12.169862] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:00.581 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.581 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:00.581 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:00.581 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.581 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.581 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.581 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.581 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.581 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.581 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.581 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.581 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.581 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.581 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.581 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.581 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.581 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.581 "name": "Existed_Raid", 00:17:00.581 "uuid": "9d75aa84-d531-4d19-a00d-f82a8fa5b25c", 00:17:00.581 "strip_size_kb": 64, 00:17:00.581 "state": "configuring", 00:17:00.581 "raid_level": "raid5f", 00:17:00.581 "superblock": true, 00:17:00.581 "num_base_bdevs": 4, 00:17:00.581 "num_base_bdevs_discovered": 2, 00:17:00.581 
"num_base_bdevs_operational": 4, 00:17:00.581 "base_bdevs_list": [ 00:17:00.581 { 00:17:00.581 "name": null, 00:17:00.581 "uuid": "00869134-e5d8-4415-9c06-54f44f57f03e", 00:17:00.581 "is_configured": false, 00:17:00.581 "data_offset": 0, 00:17:00.581 "data_size": 63488 00:17:00.581 }, 00:17:00.581 { 00:17:00.581 "name": null, 00:17:00.581 "uuid": "a35b7fd0-0184-4075-8649-65b16edf0fcf", 00:17:00.581 "is_configured": false, 00:17:00.581 "data_offset": 0, 00:17:00.581 "data_size": 63488 00:17:00.581 }, 00:17:00.581 { 00:17:00.581 "name": "BaseBdev3", 00:17:00.581 "uuid": "b46be4c7-9db0-4881-920a-ea02ca50dd50", 00:17:00.581 "is_configured": true, 00:17:00.581 "data_offset": 2048, 00:17:00.581 "data_size": 63488 00:17:00.581 }, 00:17:00.581 { 00:17:00.581 "name": "BaseBdev4", 00:17:00.581 "uuid": "2d89d4cb-155d-4410-ba23-814f7777bfbc", 00:17:00.581 "is_configured": true, 00:17:00.581 "data_offset": 2048, 00:17:00.581 "data_size": 63488 00:17:00.581 } 00:17:00.581 ] 00:17:00.581 }' 00:17:00.581 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.581 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.840 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.840 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.840 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:00.840 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.840 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.101 [2024-09-30 12:34:12.760115] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.101 "name": "Existed_Raid", 00:17:01.101 "uuid": "9d75aa84-d531-4d19-a00d-f82a8fa5b25c", 00:17:01.101 "strip_size_kb": 64, 00:17:01.101 "state": "configuring", 00:17:01.101 "raid_level": "raid5f", 00:17:01.101 "superblock": true, 00:17:01.101 "num_base_bdevs": 4, 00:17:01.101 "num_base_bdevs_discovered": 3, 00:17:01.101 "num_base_bdevs_operational": 4, 00:17:01.101 "base_bdevs_list": [ 00:17:01.101 { 00:17:01.101 "name": null, 00:17:01.101 "uuid": "00869134-e5d8-4415-9c06-54f44f57f03e", 00:17:01.101 "is_configured": false, 00:17:01.101 "data_offset": 0, 00:17:01.101 "data_size": 63488 00:17:01.101 }, 00:17:01.101 { 00:17:01.101 "name": "BaseBdev2", 00:17:01.101 "uuid": "a35b7fd0-0184-4075-8649-65b16edf0fcf", 00:17:01.101 "is_configured": true, 00:17:01.101 "data_offset": 2048, 00:17:01.101 "data_size": 63488 00:17:01.101 }, 00:17:01.101 { 00:17:01.101 "name": "BaseBdev3", 00:17:01.101 "uuid": "b46be4c7-9db0-4881-920a-ea02ca50dd50", 00:17:01.101 "is_configured": true, 00:17:01.101 "data_offset": 2048, 00:17:01.101 "data_size": 63488 00:17:01.101 }, 00:17:01.101 { 00:17:01.101 "name": "BaseBdev4", 00:17:01.101 "uuid": "2d89d4cb-155d-4410-ba23-814f7777bfbc", 00:17:01.101 "is_configured": true, 00:17:01.101 "data_offset": 2048, 00:17:01.101 "data_size": 63488 00:17:01.101 } 00:17:01.101 ] 00:17:01.101 }' 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.101 12:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:17:01.361 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.361 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.361 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.361 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:01.361 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.361 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:01.361 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.361 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:01.361 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.361 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.361 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.621 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 00869134-e5d8-4415-9c06-54f44f57f03e 00:17:01.621 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.621 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.621 [2024-09-30 12:34:13.313650] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:01.621 [2024-09-30 12:34:13.313887] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:01.621 [2024-09-30 
12:34:13.313902] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:01.621 [2024-09-30 12:34:13.314138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:01.621 NewBaseBdev 00:17:01.621 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.621 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:01.621 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:01.621 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:01.621 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:01.621 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.622 [2024-09-30 12:34:13.320407] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:01.622 [2024-09-30 12:34:13.320434] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:01.622 [2024-09-30 12:34:13.320578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.622 [ 00:17:01.622 { 00:17:01.622 "name": "NewBaseBdev", 00:17:01.622 "aliases": [ 00:17:01.622 "00869134-e5d8-4415-9c06-54f44f57f03e" 00:17:01.622 ], 00:17:01.622 "product_name": "Malloc disk", 00:17:01.622 "block_size": 512, 00:17:01.622 "num_blocks": 65536, 00:17:01.622 "uuid": "00869134-e5d8-4415-9c06-54f44f57f03e", 00:17:01.622 "assigned_rate_limits": { 00:17:01.622 "rw_ios_per_sec": 0, 00:17:01.622 "rw_mbytes_per_sec": 0, 00:17:01.622 "r_mbytes_per_sec": 0, 00:17:01.622 "w_mbytes_per_sec": 0 00:17:01.622 }, 00:17:01.622 "claimed": true, 00:17:01.622 "claim_type": "exclusive_write", 00:17:01.622 "zoned": false, 00:17:01.622 "supported_io_types": { 00:17:01.622 "read": true, 00:17:01.622 "write": true, 00:17:01.622 "unmap": true, 00:17:01.622 "flush": true, 00:17:01.622 "reset": true, 00:17:01.622 "nvme_admin": false, 00:17:01.622 "nvme_io": false, 00:17:01.622 "nvme_io_md": false, 00:17:01.622 "write_zeroes": true, 00:17:01.622 "zcopy": true, 00:17:01.622 "get_zone_info": false, 00:17:01.622 "zone_management": false, 00:17:01.622 "zone_append": false, 00:17:01.622 "compare": false, 00:17:01.622 "compare_and_write": false, 00:17:01.622 "abort": true, 00:17:01.622 "seek_hole": false, 00:17:01.622 "seek_data": false, 00:17:01.622 "copy": true, 00:17:01.622 "nvme_iov_md": false 00:17:01.622 }, 00:17:01.622 "memory_domains": [ 00:17:01.622 { 00:17:01.622 "dma_device_id": "system", 00:17:01.622 "dma_device_type": 1 00:17:01.622 }, 00:17:01.622 { 00:17:01.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.622 "dma_device_type": 2 00:17:01.622 } 00:17:01.622 ], 00:17:01.622 "driver_specific": {} 00:17:01.622 } 00:17:01.622 ] 00:17:01.622 12:34:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.622 "name": "Existed_Raid", 00:17:01.622 "uuid": "9d75aa84-d531-4d19-a00d-f82a8fa5b25c", 00:17:01.622 "strip_size_kb": 64, 00:17:01.622 "state": "online", 00:17:01.622 "raid_level": "raid5f", 00:17:01.622 "superblock": true, 00:17:01.622 "num_base_bdevs": 4, 00:17:01.622 "num_base_bdevs_discovered": 4, 00:17:01.622 "num_base_bdevs_operational": 4, 00:17:01.622 "base_bdevs_list": [ 00:17:01.622 { 00:17:01.622 "name": "NewBaseBdev", 00:17:01.622 "uuid": "00869134-e5d8-4415-9c06-54f44f57f03e", 00:17:01.622 "is_configured": true, 00:17:01.622 "data_offset": 2048, 00:17:01.622 "data_size": 63488 00:17:01.622 }, 00:17:01.622 { 00:17:01.622 "name": "BaseBdev2", 00:17:01.622 "uuid": "a35b7fd0-0184-4075-8649-65b16edf0fcf", 00:17:01.622 "is_configured": true, 00:17:01.622 "data_offset": 2048, 00:17:01.622 "data_size": 63488 00:17:01.622 }, 00:17:01.622 { 00:17:01.622 "name": "BaseBdev3", 00:17:01.622 "uuid": "b46be4c7-9db0-4881-920a-ea02ca50dd50", 00:17:01.622 "is_configured": true, 00:17:01.622 "data_offset": 2048, 00:17:01.622 "data_size": 63488 00:17:01.622 }, 00:17:01.622 { 00:17:01.622 "name": "BaseBdev4", 00:17:01.622 "uuid": "2d89d4cb-155d-4410-ba23-814f7777bfbc", 00:17:01.622 "is_configured": true, 00:17:01.622 "data_offset": 2048, 00:17:01.622 "data_size": 63488 00:17:01.622 } 00:17:01.622 ] 00:17:01.622 }' 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.622 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.193 [2024-09-30 12:34:13.835343] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:02.193 "name": "Existed_Raid", 00:17:02.193 "aliases": [ 00:17:02.193 "9d75aa84-d531-4d19-a00d-f82a8fa5b25c" 00:17:02.193 ], 00:17:02.193 "product_name": "Raid Volume", 00:17:02.193 "block_size": 512, 00:17:02.193 "num_blocks": 190464, 00:17:02.193 "uuid": "9d75aa84-d531-4d19-a00d-f82a8fa5b25c", 00:17:02.193 "assigned_rate_limits": { 00:17:02.193 "rw_ios_per_sec": 0, 00:17:02.193 "rw_mbytes_per_sec": 0, 00:17:02.193 "r_mbytes_per_sec": 0, 00:17:02.193 "w_mbytes_per_sec": 0 00:17:02.193 }, 00:17:02.193 "claimed": false, 00:17:02.193 "zoned": false, 00:17:02.193 "supported_io_types": { 00:17:02.193 "read": true, 00:17:02.193 "write": true, 00:17:02.193 "unmap": false, 00:17:02.193 "flush": false, 00:17:02.193 "reset": true, 00:17:02.193 "nvme_admin": false, 00:17:02.193 "nvme_io": false, 
00:17:02.193 "nvme_io_md": false, 00:17:02.193 "write_zeroes": true, 00:17:02.193 "zcopy": false, 00:17:02.193 "get_zone_info": false, 00:17:02.193 "zone_management": false, 00:17:02.193 "zone_append": false, 00:17:02.193 "compare": false, 00:17:02.193 "compare_and_write": false, 00:17:02.193 "abort": false, 00:17:02.193 "seek_hole": false, 00:17:02.193 "seek_data": false, 00:17:02.193 "copy": false, 00:17:02.193 "nvme_iov_md": false 00:17:02.193 }, 00:17:02.193 "driver_specific": { 00:17:02.193 "raid": { 00:17:02.193 "uuid": "9d75aa84-d531-4d19-a00d-f82a8fa5b25c", 00:17:02.193 "strip_size_kb": 64, 00:17:02.193 "state": "online", 00:17:02.193 "raid_level": "raid5f", 00:17:02.193 "superblock": true, 00:17:02.193 "num_base_bdevs": 4, 00:17:02.193 "num_base_bdevs_discovered": 4, 00:17:02.193 "num_base_bdevs_operational": 4, 00:17:02.193 "base_bdevs_list": [ 00:17:02.193 { 00:17:02.193 "name": "NewBaseBdev", 00:17:02.193 "uuid": "00869134-e5d8-4415-9c06-54f44f57f03e", 00:17:02.193 "is_configured": true, 00:17:02.193 "data_offset": 2048, 00:17:02.193 "data_size": 63488 00:17:02.193 }, 00:17:02.193 { 00:17:02.193 "name": "BaseBdev2", 00:17:02.193 "uuid": "a35b7fd0-0184-4075-8649-65b16edf0fcf", 00:17:02.193 "is_configured": true, 00:17:02.193 "data_offset": 2048, 00:17:02.193 "data_size": 63488 00:17:02.193 }, 00:17:02.193 { 00:17:02.193 "name": "BaseBdev3", 00:17:02.193 "uuid": "b46be4c7-9db0-4881-920a-ea02ca50dd50", 00:17:02.193 "is_configured": true, 00:17:02.193 "data_offset": 2048, 00:17:02.193 "data_size": 63488 00:17:02.193 }, 00:17:02.193 { 00:17:02.193 "name": "BaseBdev4", 00:17:02.193 "uuid": "2d89d4cb-155d-4410-ba23-814f7777bfbc", 00:17:02.193 "is_configured": true, 00:17:02.193 "data_offset": 2048, 00:17:02.193 "data_size": 63488 00:17:02.193 } 00:17:02.193 ] 00:17:02.193 } 00:17:02.193 } 00:17:02.193 }' 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:02.193 BaseBdev2 00:17:02.193 BaseBdev3 00:17:02.193 BaseBdev4' 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.193 12:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.193 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.193 12:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.193 12:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.193 12:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.193 12:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:02.193 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.193 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.193 12:34:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.193 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.193 12:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.193 12:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.193 12:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.193 12:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:02.193 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.193 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.193 12:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.454 [2024-09-30 12:34:14.186833] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:02.454 [2024-09-30 12:34:14.186859] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.454 [2024-09-30 12:34:14.186917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.454 [2024-09-30 12:34:14.187169] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.454 [2024-09-30 12:34:14.187185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83278 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83278 ']' 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 83278 00:17:02.454 12:34:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83278 00:17:02.454 killing process with pid 83278 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83278' 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83278 00:17:02.454 [2024-09-30 12:34:14.234439] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.454 12:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83278 00:17:02.714 [2024-09-30 12:34:14.601752] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:04.096 12:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:04.096 00:17:04.096 real 0m11.646s 00:17:04.096 user 0m18.492s 00:17:04.096 sys 0m2.124s 00:17:04.096 12:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:04.096 12:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.096 ************************************ 00:17:04.096 END TEST raid5f_state_function_test_sb 00:17:04.096 ************************************ 00:17:04.096 12:34:15 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:17:04.096 12:34:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:04.096 
12:34:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:04.096 12:34:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:04.096 ************************************ 00:17:04.096 START TEST raid5f_superblock_test 00:17:04.096 ************************************ 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83957 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83957 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83957 ']' 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:04.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:04.096 12:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.096 [2024-09-30 12:34:15.974019] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:04.096 [2024-09-30 12:34:15.974164] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83957 ] 00:17:04.356 [2024-09-30 12:34:16.144182] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.616 [2024-09-30 12:34:16.336563] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.876 [2024-09-30 12:34:16.513640] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.876 [2024-09-30 12:34:16.513694] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.876 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:04.876 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:17:04.876 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:04.876 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.137 malloc1 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.137 [2024-09-30 12:34:16.822256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:05.137 [2024-09-30 12:34:16.822324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.137 [2024-09-30 12:34:16.822347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:05.137 [2024-09-30 12:34:16.822358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.137 [2024-09-30 12:34:16.824199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.137 [2024-09-30 12:34:16.824238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:05.137 pt1 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.137 malloc2 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.137 [2024-09-30 12:34:16.910992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:05.137 [2024-09-30 12:34:16.911044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.137 [2024-09-30 12:34:16.911064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:05.137 [2024-09-30 12:34:16.911074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.137 [2024-09-30 12:34:16.912976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.137 [2024-09-30 12:34:16.913011] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:05.137 pt2 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.137 malloc3 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.137 [2024-09-30 12:34:16.964432] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:05.137 [2024-09-30 12:34:16.964480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.137 [2024-09-30 12:34:16.964498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:05.137 [2024-09-30 12:34:16.964507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.137 [2024-09-30 12:34:16.966360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.137 [2024-09-30 12:34:16.966393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:05.137 pt3 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:05.137 12:34:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.137 12:34:16 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.137 malloc4 00:17:05.137 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.137 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:05.137 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.137 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.137 [2024-09-30 12:34:17.012794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:05.137 [2024-09-30 12:34:17.012842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.137 [2024-09-30 12:34:17.012858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:05.137 [2024-09-30 12:34:17.012866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.137 [2024-09-30 12:34:17.014686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.137 [2024-09-30 12:34:17.014722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:05.137 pt4 00:17:05.137 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.137 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:05.137 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.137 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:05.137 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.137 12:34:17 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:05.137 [2024-09-30 12:34:17.024829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:05.137 [2024-09-30 12:34:17.026415] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:05.137 [2024-09-30 12:34:17.026475] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:05.137 [2024-09-30 12:34:17.026530] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:05.137 [2024-09-30 12:34:17.026709] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:05.137 [2024-09-30 12:34:17.026733] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:05.137 [2024-09-30 12:34:17.026981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:05.397 [2024-09-30 12:34:17.033217] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:05.397 [2024-09-30 12:34:17.033241] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:05.397 [2024-09-30 12:34:17.033418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.397 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.397 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:05.397 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.397 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.397 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.397 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.397 
12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:05.397 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.397 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.397 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.397 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.397 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.397 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.397 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.397 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.397 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.397 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.397 "name": "raid_bdev1", 00:17:05.397 "uuid": "da5187e6-b082-4756-8690-96701454ef51", 00:17:05.397 "strip_size_kb": 64, 00:17:05.397 "state": "online", 00:17:05.397 "raid_level": "raid5f", 00:17:05.397 "superblock": true, 00:17:05.397 "num_base_bdevs": 4, 00:17:05.397 "num_base_bdevs_discovered": 4, 00:17:05.397 "num_base_bdevs_operational": 4, 00:17:05.397 "base_bdevs_list": [ 00:17:05.397 { 00:17:05.397 "name": "pt1", 00:17:05.397 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:05.397 "is_configured": true, 00:17:05.397 "data_offset": 2048, 00:17:05.397 "data_size": 63488 00:17:05.397 }, 00:17:05.397 { 00:17:05.397 "name": "pt2", 00:17:05.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.397 "is_configured": true, 00:17:05.397 "data_offset": 2048, 00:17:05.397 
"data_size": 63488 00:17:05.397 }, 00:17:05.397 { 00:17:05.397 "name": "pt3", 00:17:05.397 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:05.397 "is_configured": true, 00:17:05.397 "data_offset": 2048, 00:17:05.397 "data_size": 63488 00:17:05.397 }, 00:17:05.397 { 00:17:05.397 "name": "pt4", 00:17:05.397 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:05.397 "is_configured": true, 00:17:05.397 "data_offset": 2048, 00:17:05.397 "data_size": 63488 00:17:05.397 } 00:17:05.397 ] 00:17:05.397 }' 00:17:05.397 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.397 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.657 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:05.658 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:05.658 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:05.658 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:05.658 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:05.658 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:05.658 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:05.658 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.658 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.658 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:05.658 [2024-09-30 12:34:17.464247] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.658 12:34:17 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.658 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:05.658 "name": "raid_bdev1", 00:17:05.658 "aliases": [ 00:17:05.658 "da5187e6-b082-4756-8690-96701454ef51" 00:17:05.658 ], 00:17:05.658 "product_name": "Raid Volume", 00:17:05.658 "block_size": 512, 00:17:05.658 "num_blocks": 190464, 00:17:05.658 "uuid": "da5187e6-b082-4756-8690-96701454ef51", 00:17:05.658 "assigned_rate_limits": { 00:17:05.658 "rw_ios_per_sec": 0, 00:17:05.658 "rw_mbytes_per_sec": 0, 00:17:05.658 "r_mbytes_per_sec": 0, 00:17:05.658 "w_mbytes_per_sec": 0 00:17:05.658 }, 00:17:05.658 "claimed": false, 00:17:05.658 "zoned": false, 00:17:05.658 "supported_io_types": { 00:17:05.658 "read": true, 00:17:05.658 "write": true, 00:17:05.658 "unmap": false, 00:17:05.658 "flush": false, 00:17:05.658 "reset": true, 00:17:05.658 "nvme_admin": false, 00:17:05.658 "nvme_io": false, 00:17:05.658 "nvme_io_md": false, 00:17:05.658 "write_zeroes": true, 00:17:05.658 "zcopy": false, 00:17:05.658 "get_zone_info": false, 00:17:05.658 "zone_management": false, 00:17:05.658 "zone_append": false, 00:17:05.658 "compare": false, 00:17:05.658 "compare_and_write": false, 00:17:05.658 "abort": false, 00:17:05.658 "seek_hole": false, 00:17:05.658 "seek_data": false, 00:17:05.658 "copy": false, 00:17:05.658 "nvme_iov_md": false 00:17:05.658 }, 00:17:05.658 "driver_specific": { 00:17:05.658 "raid": { 00:17:05.658 "uuid": "da5187e6-b082-4756-8690-96701454ef51", 00:17:05.658 "strip_size_kb": 64, 00:17:05.658 "state": "online", 00:17:05.658 "raid_level": "raid5f", 00:17:05.658 "superblock": true, 00:17:05.658 "num_base_bdevs": 4, 00:17:05.658 "num_base_bdevs_discovered": 4, 00:17:05.658 "num_base_bdevs_operational": 4, 00:17:05.658 "base_bdevs_list": [ 00:17:05.658 { 00:17:05.658 "name": "pt1", 00:17:05.658 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:05.658 "is_configured": true, 00:17:05.658 "data_offset": 2048, 
00:17:05.658 "data_size": 63488 00:17:05.658 }, 00:17:05.658 { 00:17:05.658 "name": "pt2", 00:17:05.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.658 "is_configured": true, 00:17:05.658 "data_offset": 2048, 00:17:05.658 "data_size": 63488 00:17:05.658 }, 00:17:05.658 { 00:17:05.658 "name": "pt3", 00:17:05.658 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:05.658 "is_configured": true, 00:17:05.658 "data_offset": 2048, 00:17:05.658 "data_size": 63488 00:17:05.658 }, 00:17:05.658 { 00:17:05.658 "name": "pt4", 00:17:05.658 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:05.658 "is_configured": true, 00:17:05.658 "data_offset": 2048, 00:17:05.658 "data_size": 63488 00:17:05.658 } 00:17:05.658 ] 00:17:05.658 } 00:17:05.658 } 00:17:05.658 }' 00:17:05.658 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:05.918 pt2 00:17:05.918 pt3 00:17:05.918 pt4' 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.918 12:34:17 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.918 [2024-09-30 12:34:17.771897] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=da5187e6-b082-4756-8690-96701454ef51 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
da5187e6-b082-4756-8690-96701454ef51 ']' 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.918 [2024-09-30 12:34:17.799687] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:05.918 [2024-09-30 12:34:17.799711] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:05.918 [2024-09-30 12:34:17.799783] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.918 [2024-09-30 12:34:17.799851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:05.918 [2024-09-30 12:34:17.799870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.918 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.178 
12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.178 12:34:17 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.178 [2024-09-30 12:34:17.959536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:06.178 [2024-09-30 12:34:17.961303] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:06.178 [2024-09-30 12:34:17.961345] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:06.178 [2024-09-30 12:34:17.961375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:06.178 [2024-09-30 12:34:17.961415] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:06.178 [2024-09-30 12:34:17.961454] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:06.178 [2024-09-30 12:34:17.961472] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:06.178 [2024-09-30 12:34:17.961489] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:06.178 [2024-09-30 12:34:17.961501] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.178 [2024-09-30 12:34:17.961512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:06.178 request: 00:17:06.178 { 00:17:06.178 "name": "raid_bdev1", 00:17:06.178 "raid_level": "raid5f", 00:17:06.178 "base_bdevs": [ 00:17:06.178 "malloc1", 00:17:06.178 "malloc2", 00:17:06.178 "malloc3", 00:17:06.178 "malloc4" 00:17:06.178 ], 00:17:06.178 "strip_size_kb": 64, 00:17:06.178 "superblock": false, 00:17:06.178 "method": "bdev_raid_create", 00:17:06.178 "req_id": 1 00:17:06.178 } 00:17:06.178 Got JSON-RPC error response 
00:17:06.178 response: 00:17:06.178 { 00:17:06.178 "code": -17, 00:17:06.178 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:06.178 } 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.178 12:34:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.178 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:06.178 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:06.178 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:06.178 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.178 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.178 [2024-09-30 12:34:18.027386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:06.178 [2024-09-30 12:34:18.027484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:17:06.178 [2024-09-30 12:34:18.027513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:06.178 [2024-09-30 12:34:18.027541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.178 [2024-09-30 12:34:18.029588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.178 [2024-09-30 12:34:18.029663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:06.178 [2024-09-30 12:34:18.029751] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:06.178 [2024-09-30 12:34:18.029825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:06.178 pt1 00:17:06.178 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.179 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:06.179 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.179 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.179 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.179 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.179 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.179 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.179 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.179 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.179 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:06.179 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.179 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.179 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.179 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.179 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.179 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.179 "name": "raid_bdev1", 00:17:06.179 "uuid": "da5187e6-b082-4756-8690-96701454ef51", 00:17:06.179 "strip_size_kb": 64, 00:17:06.179 "state": "configuring", 00:17:06.179 "raid_level": "raid5f", 00:17:06.179 "superblock": true, 00:17:06.179 "num_base_bdevs": 4, 00:17:06.179 "num_base_bdevs_discovered": 1, 00:17:06.179 "num_base_bdevs_operational": 4, 00:17:06.179 "base_bdevs_list": [ 00:17:06.179 { 00:17:06.179 "name": "pt1", 00:17:06.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:06.179 "is_configured": true, 00:17:06.179 "data_offset": 2048, 00:17:06.179 "data_size": 63488 00:17:06.179 }, 00:17:06.179 { 00:17:06.179 "name": null, 00:17:06.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.179 "is_configured": false, 00:17:06.179 "data_offset": 2048, 00:17:06.179 "data_size": 63488 00:17:06.179 }, 00:17:06.179 { 00:17:06.179 "name": null, 00:17:06.179 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:06.179 "is_configured": false, 00:17:06.179 "data_offset": 2048, 00:17:06.179 "data_size": 63488 00:17:06.179 }, 00:17:06.179 { 00:17:06.179 "name": null, 00:17:06.179 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:06.179 "is_configured": false, 00:17:06.179 "data_offset": 2048, 00:17:06.179 "data_size": 63488 00:17:06.179 } 00:17:06.179 ] 00:17:06.179 }' 
00:17:06.179 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.179 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.751 [2024-09-30 12:34:18.502579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:06.751 [2024-09-30 12:34:18.502670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.751 [2024-09-30 12:34:18.502687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:06.751 [2024-09-30 12:34:18.502698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.751 [2024-09-30 12:34:18.503033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.751 [2024-09-30 12:34:18.503054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:06.751 [2024-09-30 12:34:18.503104] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:06.751 [2024-09-30 12:34:18.503123] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:06.751 pt2 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.751 [2024-09-30 12:34:18.514575] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.751 "name": "raid_bdev1", 00:17:06.751 "uuid": "da5187e6-b082-4756-8690-96701454ef51", 00:17:06.751 "strip_size_kb": 64, 00:17:06.751 "state": "configuring", 00:17:06.751 "raid_level": "raid5f", 00:17:06.751 "superblock": true, 00:17:06.751 "num_base_bdevs": 4, 00:17:06.751 "num_base_bdevs_discovered": 1, 00:17:06.751 "num_base_bdevs_operational": 4, 00:17:06.751 "base_bdevs_list": [ 00:17:06.751 { 00:17:06.751 "name": "pt1", 00:17:06.751 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:06.751 "is_configured": true, 00:17:06.751 "data_offset": 2048, 00:17:06.751 "data_size": 63488 00:17:06.751 }, 00:17:06.751 { 00:17:06.751 "name": null, 00:17:06.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.751 "is_configured": false, 00:17:06.751 "data_offset": 0, 00:17:06.751 "data_size": 63488 00:17:06.751 }, 00:17:06.751 { 00:17:06.751 "name": null, 00:17:06.751 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:06.751 "is_configured": false, 00:17:06.751 "data_offset": 2048, 00:17:06.751 "data_size": 63488 00:17:06.751 }, 00:17:06.751 { 00:17:06.751 "name": null, 00:17:06.751 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:06.751 "is_configured": false, 00:17:06.751 "data_offset": 2048, 00:17:06.751 "data_size": 63488 00:17:06.751 } 00:17:06.751 ] 00:17:06.751 }' 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.751 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.330 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:07.330 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:07.330 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:17:07.330 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.330 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.330 [2024-09-30 12:34:18.937844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:07.330 [2024-09-30 12:34:18.937937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.330 [2024-09-30 12:34:18.937970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:07.330 [2024-09-30 12:34:18.938023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.330 [2024-09-30 12:34:18.938365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.330 [2024-09-30 12:34:18.938418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:07.330 [2024-09-30 12:34:18.938499] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:07.330 [2024-09-30 12:34:18.938551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:07.330 pt2 00:17:07.330 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.330 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:07.330 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:07.330 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:07.330 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.330 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.330 [2024-09-30 12:34:18.949853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:17:07.330 [2024-09-30 12:34:18.949941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.330 [2024-09-30 12:34:18.949971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:07.330 [2024-09-30 12:34:18.950018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.330 [2024-09-30 12:34:18.950336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.330 [2024-09-30 12:34:18.950393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:07.330 [2024-09-30 12:34:18.950473] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:07.330 [2024-09-30 12:34:18.950521] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:07.330 pt3 00:17:07.330 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.330 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:07.330 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:07.330 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:07.330 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.330 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.330 [2024-09-30 12:34:18.961829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:07.330 [2024-09-30 12:34:18.961918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.330 [2024-09-30 12:34:18.961967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:07.330 [2024-09-30 12:34:18.961977] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.330 [2024-09-30 12:34:18.962304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.330 [2024-09-30 12:34:18.962321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:07.331 [2024-09-30 12:34:18.962372] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:07.331 [2024-09-30 12:34:18.962393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:07.331 [2024-09-30 12:34:18.962514] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:07.331 [2024-09-30 12:34:18.962523] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:07.331 [2024-09-30 12:34:18.962738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:07.331 [2024-09-30 12:34:18.969355] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:07.331 pt4 00:17:07.331 [2024-09-30 12:34:18.969429] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:07.331 [2024-09-30 12:34:18.969607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.331 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.331 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:07.331 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:07.331 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:07.331 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.331 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:07.331 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.331 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.331 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.331 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.331 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.331 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.331 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.331 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.331 12:34:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.331 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.331 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.331 12:34:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.331 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.331 "name": "raid_bdev1", 00:17:07.331 "uuid": "da5187e6-b082-4756-8690-96701454ef51", 00:17:07.331 "strip_size_kb": 64, 00:17:07.331 "state": "online", 00:17:07.331 "raid_level": "raid5f", 00:17:07.331 "superblock": true, 00:17:07.331 "num_base_bdevs": 4, 00:17:07.331 "num_base_bdevs_discovered": 4, 00:17:07.331 "num_base_bdevs_operational": 4, 00:17:07.331 "base_bdevs_list": [ 00:17:07.331 { 00:17:07.331 "name": "pt1", 00:17:07.331 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.331 "is_configured": true, 00:17:07.331 
"data_offset": 2048, 00:17:07.331 "data_size": 63488 00:17:07.331 }, 00:17:07.331 { 00:17:07.331 "name": "pt2", 00:17:07.331 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.331 "is_configured": true, 00:17:07.331 "data_offset": 2048, 00:17:07.331 "data_size": 63488 00:17:07.331 }, 00:17:07.331 { 00:17:07.331 "name": "pt3", 00:17:07.331 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:07.331 "is_configured": true, 00:17:07.331 "data_offset": 2048, 00:17:07.331 "data_size": 63488 00:17:07.331 }, 00:17:07.331 { 00:17:07.331 "name": "pt4", 00:17:07.331 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:07.331 "is_configured": true, 00:17:07.331 "data_offset": 2048, 00:17:07.331 "data_size": 63488 00:17:07.331 } 00:17:07.331 ] 00:17:07.331 }' 00:17:07.331 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.331 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.597 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:07.597 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:07.597 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:07.597 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:07.597 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:07.597 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:07.597 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.597 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.597 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.597 12:34:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:07.597 [2024-09-30 12:34:19.421013] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.597 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.597 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:07.597 "name": "raid_bdev1", 00:17:07.597 "aliases": [ 00:17:07.597 "da5187e6-b082-4756-8690-96701454ef51" 00:17:07.597 ], 00:17:07.597 "product_name": "Raid Volume", 00:17:07.597 "block_size": 512, 00:17:07.597 "num_blocks": 190464, 00:17:07.597 "uuid": "da5187e6-b082-4756-8690-96701454ef51", 00:17:07.597 "assigned_rate_limits": { 00:17:07.597 "rw_ios_per_sec": 0, 00:17:07.597 "rw_mbytes_per_sec": 0, 00:17:07.597 "r_mbytes_per_sec": 0, 00:17:07.597 "w_mbytes_per_sec": 0 00:17:07.597 }, 00:17:07.597 "claimed": false, 00:17:07.597 "zoned": false, 00:17:07.597 "supported_io_types": { 00:17:07.597 "read": true, 00:17:07.597 "write": true, 00:17:07.597 "unmap": false, 00:17:07.597 "flush": false, 00:17:07.597 "reset": true, 00:17:07.597 "nvme_admin": false, 00:17:07.597 "nvme_io": false, 00:17:07.597 "nvme_io_md": false, 00:17:07.597 "write_zeroes": true, 00:17:07.597 "zcopy": false, 00:17:07.597 "get_zone_info": false, 00:17:07.597 "zone_management": false, 00:17:07.597 "zone_append": false, 00:17:07.597 "compare": false, 00:17:07.597 "compare_and_write": false, 00:17:07.597 "abort": false, 00:17:07.597 "seek_hole": false, 00:17:07.597 "seek_data": false, 00:17:07.597 "copy": false, 00:17:07.597 "nvme_iov_md": false 00:17:07.597 }, 00:17:07.597 "driver_specific": { 00:17:07.597 "raid": { 00:17:07.597 "uuid": "da5187e6-b082-4756-8690-96701454ef51", 00:17:07.597 "strip_size_kb": 64, 00:17:07.597 "state": "online", 00:17:07.597 "raid_level": "raid5f", 00:17:07.597 "superblock": true, 00:17:07.597 "num_base_bdevs": 4, 00:17:07.597 "num_base_bdevs_discovered": 4, 
00:17:07.597 "num_base_bdevs_operational": 4, 00:17:07.597 "base_bdevs_list": [ 00:17:07.597 { 00:17:07.597 "name": "pt1", 00:17:07.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.597 "is_configured": true, 00:17:07.597 "data_offset": 2048, 00:17:07.597 "data_size": 63488 00:17:07.597 }, 00:17:07.597 { 00:17:07.597 "name": "pt2", 00:17:07.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.597 "is_configured": true, 00:17:07.597 "data_offset": 2048, 00:17:07.597 "data_size": 63488 00:17:07.597 }, 00:17:07.597 { 00:17:07.597 "name": "pt3", 00:17:07.597 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:07.597 "is_configured": true, 00:17:07.597 "data_offset": 2048, 00:17:07.597 "data_size": 63488 00:17:07.597 }, 00:17:07.597 { 00:17:07.597 "name": "pt4", 00:17:07.597 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:07.597 "is_configured": true, 00:17:07.597 "data_offset": 2048, 00:17:07.597 "data_size": 63488 00:17:07.597 } 00:17:07.597 ] 00:17:07.597 } 00:17:07.597 } 00:17:07.597 }' 00:17:07.597 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:07.866 pt2 00:17:07.866 pt3 00:17:07.866 pt4' 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.866 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.867 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:07.867 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.867 12:34:19 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.867 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.867 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:07.867 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:07.867 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.867 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:07.867 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.867 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.867 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.867 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.867 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:07.867 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:07.867 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.867 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.867 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:07.867 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.867 [2024-09-30 12:34:19.752407] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.126 
12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' da5187e6-b082-4756-8690-96701454ef51 '!=' da5187e6-b082-4756-8690-96701454ef51 ']' 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.126 [2024-09-30 12:34:19.800246] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.126 "name": "raid_bdev1", 00:17:08.126 "uuid": "da5187e6-b082-4756-8690-96701454ef51", 00:17:08.126 "strip_size_kb": 64, 00:17:08.126 "state": "online", 00:17:08.126 "raid_level": "raid5f", 00:17:08.126 "superblock": true, 00:17:08.126 "num_base_bdevs": 4, 00:17:08.126 "num_base_bdevs_discovered": 3, 00:17:08.126 "num_base_bdevs_operational": 3, 00:17:08.126 "base_bdevs_list": [ 00:17:08.126 { 00:17:08.126 "name": null, 00:17:08.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.126 "is_configured": false, 00:17:08.126 "data_offset": 0, 00:17:08.126 "data_size": 63488 00:17:08.126 }, 00:17:08.126 { 00:17:08.126 "name": "pt2", 00:17:08.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.126 "is_configured": true, 00:17:08.126 "data_offset": 2048, 00:17:08.126 "data_size": 63488 00:17:08.126 }, 00:17:08.126 { 00:17:08.126 "name": "pt3", 00:17:08.126 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:08.126 "is_configured": true, 00:17:08.126 "data_offset": 2048, 00:17:08.126 "data_size": 63488 00:17:08.126 }, 00:17:08.126 { 00:17:08.126 "name": "pt4", 00:17:08.126 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:08.126 "is_configured": true, 00:17:08.126 
"data_offset": 2048, 00:17:08.126 "data_size": 63488 00:17:08.126 } 00:17:08.126 ] 00:17:08.126 }' 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.126 12:34:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.386 [2024-09-30 12:34:20.191567] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.386 [2024-09-30 12:34:20.191641] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.386 [2024-09-30 12:34:20.191708] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.386 [2024-09-30 12:34:20.191818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:08.386 [2024-09-30 12:34:20.191862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.386 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.646 [2024-09-30 12:34:20.287400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:08.646 [2024-09-30 12:34:20.287508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.646 [2024-09-30 12:34:20.287527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:08.646 [2024-09-30 12:34:20.287535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.646 [2024-09-30 12:34:20.289539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.646 [2024-09-30 12:34:20.289578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:08.646 [2024-09-30 12:34:20.289638] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:08.646 [2024-09-30 12:34:20.289675] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:08.646 pt2 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.646 "name": "raid_bdev1", 00:17:08.646 "uuid": "da5187e6-b082-4756-8690-96701454ef51", 00:17:08.646 "strip_size_kb": 64, 00:17:08.646 "state": "configuring", 00:17:08.646 "raid_level": "raid5f", 00:17:08.646 "superblock": true, 00:17:08.646 
"num_base_bdevs": 4, 00:17:08.646 "num_base_bdevs_discovered": 1, 00:17:08.646 "num_base_bdevs_operational": 3, 00:17:08.646 "base_bdevs_list": [ 00:17:08.646 { 00:17:08.646 "name": null, 00:17:08.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.646 "is_configured": false, 00:17:08.646 "data_offset": 2048, 00:17:08.646 "data_size": 63488 00:17:08.646 }, 00:17:08.646 { 00:17:08.646 "name": "pt2", 00:17:08.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.646 "is_configured": true, 00:17:08.646 "data_offset": 2048, 00:17:08.646 "data_size": 63488 00:17:08.646 }, 00:17:08.646 { 00:17:08.646 "name": null, 00:17:08.646 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:08.646 "is_configured": false, 00:17:08.646 "data_offset": 2048, 00:17:08.646 "data_size": 63488 00:17:08.646 }, 00:17:08.646 { 00:17:08.646 "name": null, 00:17:08.646 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:08.646 "is_configured": false, 00:17:08.646 "data_offset": 2048, 00:17:08.646 "data_size": 63488 00:17:08.646 } 00:17:08.646 ] 00:17:08.646 }' 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.646 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.906 [2024-09-30 12:34:20.742612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:08.906 [2024-09-30 
12:34:20.742702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.906 [2024-09-30 12:34:20.742733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:08.906 [2024-09-30 12:34:20.742772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.906 [2024-09-30 12:34:20.743107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.906 [2024-09-30 12:34:20.743161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:08.906 [2024-09-30 12:34:20.743238] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:08.906 [2024-09-30 12:34:20.743293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:08.906 pt3 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.906 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.165 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.165 "name": "raid_bdev1", 00:17:09.165 "uuid": "da5187e6-b082-4756-8690-96701454ef51", 00:17:09.165 "strip_size_kb": 64, 00:17:09.165 "state": "configuring", 00:17:09.165 "raid_level": "raid5f", 00:17:09.165 "superblock": true, 00:17:09.165 "num_base_bdevs": 4, 00:17:09.165 "num_base_bdevs_discovered": 2, 00:17:09.165 "num_base_bdevs_operational": 3, 00:17:09.165 "base_bdevs_list": [ 00:17:09.165 { 00:17:09.165 "name": null, 00:17:09.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.165 "is_configured": false, 00:17:09.165 "data_offset": 2048, 00:17:09.165 "data_size": 63488 00:17:09.165 }, 00:17:09.165 { 00:17:09.165 "name": "pt2", 00:17:09.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.165 "is_configured": true, 00:17:09.165 "data_offset": 2048, 00:17:09.165 "data_size": 63488 00:17:09.165 }, 00:17:09.165 { 00:17:09.165 "name": "pt3", 00:17:09.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:09.165 "is_configured": true, 00:17:09.165 "data_offset": 2048, 00:17:09.165 "data_size": 63488 00:17:09.165 }, 00:17:09.165 { 00:17:09.165 "name": null, 00:17:09.165 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:09.165 "is_configured": false, 00:17:09.165 "data_offset": 2048, 
00:17:09.165 "data_size": 63488 00:17:09.165 } 00:17:09.165 ] 00:17:09.165 }' 00:17:09.165 12:34:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.165 12:34:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.425 [2024-09-30 12:34:21.209850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:09.425 [2024-09-30 12:34:21.209895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.425 [2024-09-30 12:34:21.209912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:09.425 [2024-09-30 12:34:21.209920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.425 [2024-09-30 12:34:21.210260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.425 [2024-09-30 12:34:21.210277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:09.425 [2024-09-30 12:34:21.210334] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:09.425 [2024-09-30 12:34:21.210350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:09.425 [2024-09-30 12:34:21.210458] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:09.425 [2024-09-30 12:34:21.210466] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:09.425 [2024-09-30 12:34:21.210680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:09.425 [2024-09-30 12:34:21.217555] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:09.425 [2024-09-30 12:34:21.217579] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:09.425 [2024-09-30 12:34:21.217859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.425 pt4 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.425 
12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.425 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.425 "name": "raid_bdev1", 00:17:09.425 "uuid": "da5187e6-b082-4756-8690-96701454ef51", 00:17:09.425 "strip_size_kb": 64, 00:17:09.425 "state": "online", 00:17:09.425 "raid_level": "raid5f", 00:17:09.425 "superblock": true, 00:17:09.425 "num_base_bdevs": 4, 00:17:09.425 "num_base_bdevs_discovered": 3, 00:17:09.425 "num_base_bdevs_operational": 3, 00:17:09.425 "base_bdevs_list": [ 00:17:09.425 { 00:17:09.425 "name": null, 00:17:09.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.425 "is_configured": false, 00:17:09.425 "data_offset": 2048, 00:17:09.425 "data_size": 63488 00:17:09.425 }, 00:17:09.425 { 00:17:09.425 "name": "pt2", 00:17:09.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.425 "is_configured": true, 00:17:09.425 "data_offset": 2048, 00:17:09.425 "data_size": 63488 00:17:09.426 }, 00:17:09.426 { 00:17:09.426 "name": "pt3", 00:17:09.426 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:09.426 "is_configured": true, 00:17:09.426 "data_offset": 2048, 00:17:09.426 "data_size": 63488 00:17:09.426 }, 00:17:09.426 { 00:17:09.426 "name": "pt4", 00:17:09.426 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:09.426 "is_configured": true, 00:17:09.426 "data_offset": 2048, 00:17:09.426 "data_size": 63488 00:17:09.426 } 00:17:09.426 ] 00:17:09.426 }' 00:17:09.426 12:34:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.426 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.995 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:09.995 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.995 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.995 [2024-09-30 12:34:21.681189] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.995 [2024-09-30 12:34:21.681261] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.995 [2024-09-30 12:34:21.681348] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.995 [2024-09-30 12:34:21.681438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.995 [2024-09-30 12:34:21.681484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:09.995 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.995 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.995 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.995 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.996 [2024-09-30 12:34:21.753079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:09.996 [2024-09-30 12:34:21.753182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.996 [2024-09-30 12:34:21.753213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:09.996 [2024-09-30 12:34:21.753242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.996 [2024-09-30 12:34:21.755296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.996 [2024-09-30 12:34:21.755367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:09.996 [2024-09-30 12:34:21.755477] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:09.996 [2024-09-30 12:34:21.755561] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:09.996 
[2024-09-30 12:34:21.755695] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:09.996 [2024-09-30 12:34:21.755764] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.996 [2024-09-30 12:34:21.755841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:09.996 [2024-09-30 12:34:21.755931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:09.996 [2024-09-30 12:34:21.756062] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:09.996 pt1 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.996 "name": "raid_bdev1", 00:17:09.996 "uuid": "da5187e6-b082-4756-8690-96701454ef51", 00:17:09.996 "strip_size_kb": 64, 00:17:09.996 "state": "configuring", 00:17:09.996 "raid_level": "raid5f", 00:17:09.996 "superblock": true, 00:17:09.996 "num_base_bdevs": 4, 00:17:09.996 "num_base_bdevs_discovered": 2, 00:17:09.996 "num_base_bdevs_operational": 3, 00:17:09.996 "base_bdevs_list": [ 00:17:09.996 { 00:17:09.996 "name": null, 00:17:09.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.996 "is_configured": false, 00:17:09.996 "data_offset": 2048, 00:17:09.996 "data_size": 63488 00:17:09.996 }, 00:17:09.996 { 00:17:09.996 "name": "pt2", 00:17:09.996 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.996 "is_configured": true, 00:17:09.996 "data_offset": 2048, 00:17:09.996 "data_size": 63488 00:17:09.996 }, 00:17:09.996 { 00:17:09.996 "name": "pt3", 00:17:09.996 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:09.996 "is_configured": true, 00:17:09.996 "data_offset": 2048, 00:17:09.996 "data_size": 63488 00:17:09.996 }, 00:17:09.996 { 00:17:09.996 "name": null, 00:17:09.996 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:09.996 "is_configured": false, 00:17:09.996 "data_offset": 2048, 00:17:09.996 "data_size": 63488 00:17:09.996 } 00:17:09.996 ] 
00:17:09.996 }' 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.996 12:34:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.565 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:10.565 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:10.565 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.565 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.565 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.565 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:10.565 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:10.565 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.565 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.565 [2024-09-30 12:34:22.288169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:10.566 [2024-09-30 12:34:22.288215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.566 [2024-09-30 12:34:22.288236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:10.566 [2024-09-30 12:34:22.288244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.566 [2024-09-30 12:34:22.288572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.566 [2024-09-30 12:34:22.288587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:17:10.566 [2024-09-30 12:34:22.288643] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:10.566 [2024-09-30 12:34:22.288659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:10.566 [2024-09-30 12:34:22.288793] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:10.566 [2024-09-30 12:34:22.288802] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:10.566 [2024-09-30 12:34:22.289022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:10.566 [2024-09-30 12:34:22.296019] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:10.566 [2024-09-30 12:34:22.296042] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:10.566 [2024-09-30 12:34:22.296273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.566 pt4 00:17:10.566 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.566 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:10.566 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.566 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.566 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.566 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.566 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.566 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.566 12:34:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.566 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.566 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.566 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.566 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.566 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.566 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.566 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.566 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.566 "name": "raid_bdev1", 00:17:10.566 "uuid": "da5187e6-b082-4756-8690-96701454ef51", 00:17:10.566 "strip_size_kb": 64, 00:17:10.566 "state": "online", 00:17:10.566 "raid_level": "raid5f", 00:17:10.566 "superblock": true, 00:17:10.566 "num_base_bdevs": 4, 00:17:10.566 "num_base_bdevs_discovered": 3, 00:17:10.566 "num_base_bdevs_operational": 3, 00:17:10.566 "base_bdevs_list": [ 00:17:10.566 { 00:17:10.566 "name": null, 00:17:10.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.566 "is_configured": false, 00:17:10.566 "data_offset": 2048, 00:17:10.566 "data_size": 63488 00:17:10.566 }, 00:17:10.566 { 00:17:10.566 "name": "pt2", 00:17:10.566 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:10.566 "is_configured": true, 00:17:10.566 "data_offset": 2048, 00:17:10.566 "data_size": 63488 00:17:10.566 }, 00:17:10.566 { 00:17:10.566 "name": "pt3", 00:17:10.566 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:10.566 "is_configured": true, 00:17:10.566 "data_offset": 2048, 00:17:10.566 "data_size": 63488 
00:17:10.566 }, 00:17:10.566 { 00:17:10.566 "name": "pt4", 00:17:10.566 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:10.566 "is_configured": true, 00:17:10.566 "data_offset": 2048, 00:17:10.566 "data_size": 63488 00:17:10.566 } 00:17:10.566 ] 00:17:10.566 }' 00:17:10.566 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.566 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.826 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:10.826 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:10.826 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.086 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.086 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.086 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:11.086 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:11.086 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:11.086 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.086 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.086 [2024-09-30 12:34:22.775729] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:11.086 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.086 12:34:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' da5187e6-b082-4756-8690-96701454ef51 '!=' da5187e6-b082-4756-8690-96701454ef51 ']' 00:17:11.086 12:34:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83957 00:17:11.086 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83957 ']' 00:17:11.086 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83957 00:17:11.086 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:17:11.086 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:11.086 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83957 00:17:11.086 killing process with pid 83957 00:17:11.086 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:11.086 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:11.086 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83957' 00:17:11.086 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 83957 00:17:11.086 [2024-09-30 12:34:22.856067] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:11.086 [2024-09-30 12:34:22.856126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.086 [2024-09-30 12:34:22.856181] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:11.086 [2024-09-30 12:34:22.856191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:11.086 12:34:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 83957 00:17:11.346 [2024-09-30 12:34:23.225758] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:12.727 12:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:12.727 
00:17:12.727 real 0m8.546s 00:17:12.727 user 0m13.326s 00:17:12.727 sys 0m1.639s 00:17:12.727 ************************************ 00:17:12.727 END TEST raid5f_superblock_test 00:17:12.727 ************************************ 00:17:12.727 12:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:12.727 12:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.727 12:34:24 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:12.727 12:34:24 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:17:12.727 12:34:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:12.727 12:34:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:12.727 12:34:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:12.727 ************************************ 00:17:12.727 START TEST raid5f_rebuild_test 00:17:12.727 ************************************ 00:17:12.727 12:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:17:12.727 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:12.727 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:12.727 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:12.727 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:12.727 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:12.728 12:34:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84443 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84443 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 84443 ']' 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:12.728 12:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.728 [2024-09-30 12:34:24.617048] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:12.728 [2024-09-30 12:34:24.617267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84443 ] 00:17:12.728 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:12.728 Zero copy mechanism will not be used. 00:17:12.989 [2024-09-30 12:34:24.786978] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.249 [2024-09-30 12:34:24.976855] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.509 [2024-09-30 12:34:25.157633] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.509 [2024-09-30 12:34:25.157733] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.770 BaseBdev1_malloc 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:17:13.770 [2024-09-30 12:34:25.461101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:13.770 [2024-09-30 12:34:25.461254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.770 [2024-09-30 12:34:25.461281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:13.770 [2024-09-30 12:34:25.461295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.770 [2024-09-30 12:34:25.463217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.770 [2024-09-30 12:34:25.463256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:13.770 BaseBdev1 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.770 BaseBdev2_malloc 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.770 [2024-09-30 12:34:25.543007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:13.770 [2024-09-30 12:34:25.543065] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.770 [2024-09-30 12:34:25.543083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:13.770 [2024-09-30 12:34:25.543097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.770 [2024-09-30 12:34:25.545069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.770 [2024-09-30 12:34:25.545109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:13.770 BaseBdev2 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.770 BaseBdev3_malloc 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.770 [2024-09-30 12:34:25.592120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:13.770 [2024-09-30 12:34:25.592171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.770 [2024-09-30 12:34:25.592190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:13.770 
[2024-09-30 12:34:25.592201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.770 [2024-09-30 12:34:25.594118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.770 [2024-09-30 12:34:25.594158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:13.770 BaseBdev3 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.770 BaseBdev4_malloc 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.770 [2024-09-30 12:34:25.645011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:13.770 [2024-09-30 12:34:25.645063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.770 [2024-09-30 12:34:25.645080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:13.770 [2024-09-30 12:34:25.645090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.770 [2024-09-30 12:34:25.647146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:17:13.770 [2024-09-30 12:34:25.647220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:13.770 BaseBdev4 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.770 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.031 spare_malloc 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.031 spare_delay 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.031 [2024-09-30 12:34:25.709110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:14.031 [2024-09-30 12:34:25.709166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.031 [2024-09-30 12:34:25.709184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:14.031 [2024-09-30 12:34:25.709194] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.031 [2024-09-30 12:34:25.711126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.031 [2024-09-30 12:34:25.711164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:14.031 spare 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.031 [2024-09-30 12:34:25.721148] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:14.031 [2024-09-30 12:34:25.722828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:14.031 [2024-09-30 12:34:25.722957] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:14.031 [2024-09-30 12:34:25.723011] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:14.031 [2024-09-30 12:34:25.723091] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:14.031 [2024-09-30 12:34:25.723101] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:14.031 [2024-09-30 12:34:25.723334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:14.031 [2024-09-30 12:34:25.729787] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:14.031 [2024-09-30 12:34:25.729856] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:14.031 [2024-09-30 
12:34:25.730044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.031 "name": "raid_bdev1", 00:17:14.031 "uuid": 
"cee10dc4-bf1a-4e9d-8fc9-35e5bc60db6c", 00:17:14.031 "strip_size_kb": 64, 00:17:14.031 "state": "online", 00:17:14.031 "raid_level": "raid5f", 00:17:14.031 "superblock": false, 00:17:14.031 "num_base_bdevs": 4, 00:17:14.031 "num_base_bdevs_discovered": 4, 00:17:14.031 "num_base_bdevs_operational": 4, 00:17:14.031 "base_bdevs_list": [ 00:17:14.031 { 00:17:14.031 "name": "BaseBdev1", 00:17:14.031 "uuid": "3e64203a-d97b-54e8-9658-512099e5aa17", 00:17:14.031 "is_configured": true, 00:17:14.031 "data_offset": 0, 00:17:14.031 "data_size": 65536 00:17:14.031 }, 00:17:14.031 { 00:17:14.031 "name": "BaseBdev2", 00:17:14.031 "uuid": "36bb69ca-3ff4-5db9-8346-4a02d060d807", 00:17:14.031 "is_configured": true, 00:17:14.031 "data_offset": 0, 00:17:14.031 "data_size": 65536 00:17:14.031 }, 00:17:14.031 { 00:17:14.031 "name": "BaseBdev3", 00:17:14.031 "uuid": "def06a51-db16-5e83-a64f-90b3470a070c", 00:17:14.031 "is_configured": true, 00:17:14.031 "data_offset": 0, 00:17:14.031 "data_size": 65536 00:17:14.031 }, 00:17:14.031 { 00:17:14.031 "name": "BaseBdev4", 00:17:14.031 "uuid": "cda345c4-941e-5d8b-841d-cfc3bb34152f", 00:17:14.031 "is_configured": true, 00:17:14.031 "data_offset": 0, 00:17:14.031 "data_size": 65536 00:17:14.031 } 00:17:14.031 ] 00:17:14.031 }' 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.031 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.292 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:14.292 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:14.292 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.292 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.292 [2024-09-30 12:34:26.120953] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:14.292 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.292 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:17:14.292 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.292 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.292 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.292 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:14.292 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:14.552 [2024-09-30 12:34:26.372370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:14.552 /dev/nbd0 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:14.552 1+0 records in 00:17:14.552 1+0 records out 00:17:14.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498659 s, 8.2 MB/s 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.552 12:34:26 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:14.552 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.812 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:14.812 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:14.812 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:14.812 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:14.812 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:14.812 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:14.812 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:14.812 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:17:15.072 512+0 records in 00:17:15.072 512+0 records out 00:17:15.072 100663296 bytes (101 MB, 96 MiB) copied, 0.497138 s, 202 MB/s 00:17:15.072 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:15.072 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:15.072 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:15.072 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:15.072 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:15.073 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:15.073 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:15.333 [2024-09-30 12:34:27.152076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.333 [2024-09-30 12:34:27.167606] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.333 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.594 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.594 "name": "raid_bdev1", 00:17:15.594 "uuid": "cee10dc4-bf1a-4e9d-8fc9-35e5bc60db6c", 00:17:15.594 "strip_size_kb": 64, 00:17:15.594 "state": "online", 00:17:15.594 "raid_level": "raid5f", 00:17:15.594 "superblock": false, 00:17:15.594 "num_base_bdevs": 4, 00:17:15.594 "num_base_bdevs_discovered": 3, 00:17:15.594 "num_base_bdevs_operational": 3, 00:17:15.594 "base_bdevs_list": [ 00:17:15.594 { 00:17:15.594 "name": null, 00:17:15.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.594 "is_configured": false, 00:17:15.594 "data_offset": 0, 00:17:15.594 "data_size": 65536 00:17:15.594 }, 00:17:15.594 { 00:17:15.594 "name": "BaseBdev2", 00:17:15.594 "uuid": "36bb69ca-3ff4-5db9-8346-4a02d060d807", 00:17:15.594 "is_configured": true, 00:17:15.594 
"data_offset": 0, 00:17:15.594 "data_size": 65536 00:17:15.594 }, 00:17:15.594 { 00:17:15.594 "name": "BaseBdev3", 00:17:15.594 "uuid": "def06a51-db16-5e83-a64f-90b3470a070c", 00:17:15.594 "is_configured": true, 00:17:15.594 "data_offset": 0, 00:17:15.594 "data_size": 65536 00:17:15.594 }, 00:17:15.594 { 00:17:15.594 "name": "BaseBdev4", 00:17:15.594 "uuid": "cda345c4-941e-5d8b-841d-cfc3bb34152f", 00:17:15.594 "is_configured": true, 00:17:15.594 "data_offset": 0, 00:17:15.594 "data_size": 65536 00:17:15.594 } 00:17:15.594 ] 00:17:15.594 }' 00:17:15.594 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.594 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.854 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:15.854 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.854 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.854 [2024-09-30 12:34:27.586822] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.854 [2024-09-30 12:34:27.601524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:15.854 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.854 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:15.854 [2024-09-30 12:34:27.610316] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:16.793 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.793 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.793 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:16.793 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.793 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.793 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.793 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.793 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.793 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.793 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.793 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.793 "name": "raid_bdev1", 00:17:16.793 "uuid": "cee10dc4-bf1a-4e9d-8fc9-35e5bc60db6c", 00:17:16.793 "strip_size_kb": 64, 00:17:16.793 "state": "online", 00:17:16.793 "raid_level": "raid5f", 00:17:16.793 "superblock": false, 00:17:16.793 "num_base_bdevs": 4, 00:17:16.793 "num_base_bdevs_discovered": 4, 00:17:16.793 "num_base_bdevs_operational": 4, 00:17:16.793 "process": { 00:17:16.793 "type": "rebuild", 00:17:16.793 "target": "spare", 00:17:16.793 "progress": { 00:17:16.793 "blocks": 19200, 00:17:16.793 "percent": 9 00:17:16.793 } 00:17:16.793 }, 00:17:16.793 "base_bdevs_list": [ 00:17:16.793 { 00:17:16.793 "name": "spare", 00:17:16.793 "uuid": "613518c5-e8ce-593b-bc92-897c220c86d3", 00:17:16.793 "is_configured": true, 00:17:16.793 "data_offset": 0, 00:17:16.793 "data_size": 65536 00:17:16.793 }, 00:17:16.793 { 00:17:16.793 "name": "BaseBdev2", 00:17:16.793 "uuid": "36bb69ca-3ff4-5db9-8346-4a02d060d807", 00:17:16.793 "is_configured": true, 00:17:16.793 "data_offset": 0, 00:17:16.793 "data_size": 65536 00:17:16.793 }, 00:17:16.793 { 00:17:16.793 "name": "BaseBdev3", 00:17:16.793 "uuid": 
"def06a51-db16-5e83-a64f-90b3470a070c", 00:17:16.793 "is_configured": true, 00:17:16.793 "data_offset": 0, 00:17:16.793 "data_size": 65536 00:17:16.793 }, 00:17:16.793 { 00:17:16.793 "name": "BaseBdev4", 00:17:16.793 "uuid": "cda345c4-941e-5d8b-841d-cfc3bb34152f", 00:17:16.793 "is_configured": true, 00:17:16.793 "data_offset": 0, 00:17:16.793 "data_size": 65536 00:17:16.793 } 00:17:16.793 ] 00:17:16.793 }' 00:17:16.793 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.793 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.053 [2024-09-30 12:34:28.744962] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.053 [2024-09-30 12:34:28.817640] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:17.053 [2024-09-30 12:34:28.817724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.053 [2024-09-30 12:34:28.817749] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.053 [2024-09-30 12:34:28.817761] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.053 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.053 "name": "raid_bdev1", 00:17:17.053 "uuid": "cee10dc4-bf1a-4e9d-8fc9-35e5bc60db6c", 00:17:17.054 "strip_size_kb": 64, 00:17:17.054 "state": "online", 00:17:17.054 "raid_level": "raid5f", 00:17:17.054 "superblock": false, 00:17:17.054 "num_base_bdevs": 4, 00:17:17.054 "num_base_bdevs_discovered": 3, 00:17:17.054 
"num_base_bdevs_operational": 3, 00:17:17.054 "base_bdevs_list": [ 00:17:17.054 { 00:17:17.054 "name": null, 00:17:17.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.054 "is_configured": false, 00:17:17.054 "data_offset": 0, 00:17:17.054 "data_size": 65536 00:17:17.054 }, 00:17:17.054 { 00:17:17.054 "name": "BaseBdev2", 00:17:17.054 "uuid": "36bb69ca-3ff4-5db9-8346-4a02d060d807", 00:17:17.054 "is_configured": true, 00:17:17.054 "data_offset": 0, 00:17:17.054 "data_size": 65536 00:17:17.054 }, 00:17:17.054 { 00:17:17.054 "name": "BaseBdev3", 00:17:17.054 "uuid": "def06a51-db16-5e83-a64f-90b3470a070c", 00:17:17.054 "is_configured": true, 00:17:17.054 "data_offset": 0, 00:17:17.054 "data_size": 65536 00:17:17.054 }, 00:17:17.054 { 00:17:17.054 "name": "BaseBdev4", 00:17:17.054 "uuid": "cda345c4-941e-5d8b-841d-cfc3bb34152f", 00:17:17.054 "is_configured": true, 00:17:17.054 "data_offset": 0, 00:17:17.054 "data_size": 65536 00:17:17.054 } 00:17:17.054 ] 00:17:17.054 }' 00:17:17.054 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.054 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.624 12:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.624 12:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.624 12:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.624 12:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.624 12:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.624 12:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.624 12:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.624 12:34:29 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.624 12:34:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.624 12:34:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.624 12:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.624 "name": "raid_bdev1", 00:17:17.624 "uuid": "cee10dc4-bf1a-4e9d-8fc9-35e5bc60db6c", 00:17:17.624 "strip_size_kb": 64, 00:17:17.624 "state": "online", 00:17:17.624 "raid_level": "raid5f", 00:17:17.624 "superblock": false, 00:17:17.624 "num_base_bdevs": 4, 00:17:17.624 "num_base_bdevs_discovered": 3, 00:17:17.624 "num_base_bdevs_operational": 3, 00:17:17.624 "base_bdevs_list": [ 00:17:17.624 { 00:17:17.624 "name": null, 00:17:17.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.624 "is_configured": false, 00:17:17.624 "data_offset": 0, 00:17:17.624 "data_size": 65536 00:17:17.624 }, 00:17:17.624 { 00:17:17.624 "name": "BaseBdev2", 00:17:17.624 "uuid": "36bb69ca-3ff4-5db9-8346-4a02d060d807", 00:17:17.624 "is_configured": true, 00:17:17.624 "data_offset": 0, 00:17:17.624 "data_size": 65536 00:17:17.624 }, 00:17:17.624 { 00:17:17.624 "name": "BaseBdev3", 00:17:17.624 "uuid": "def06a51-db16-5e83-a64f-90b3470a070c", 00:17:17.624 "is_configured": true, 00:17:17.624 "data_offset": 0, 00:17:17.624 "data_size": 65536 00:17:17.624 }, 00:17:17.624 { 00:17:17.624 "name": "BaseBdev4", 00:17:17.624 "uuid": "cda345c4-941e-5d8b-841d-cfc3bb34152f", 00:17:17.624 "is_configured": true, 00:17:17.624 "data_offset": 0, 00:17:17.624 "data_size": 65536 00:17:17.624 } 00:17:17.624 ] 00:17:17.624 }' 00:17:17.624 12:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.624 12:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.624 12:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:17:17.624 12:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.624 12:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:17.624 12:34:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.624 12:34:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.624 [2024-09-30 12:34:29.443197] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:17.624 [2024-09-30 12:34:29.456798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:17:17.624 12:34:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.624 12:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:17.624 [2024-09-30 12:34:29.465991] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.005 12:34:30 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.005 "name": "raid_bdev1", 00:17:19.005 "uuid": "cee10dc4-bf1a-4e9d-8fc9-35e5bc60db6c", 00:17:19.005 "strip_size_kb": 64, 00:17:19.005 "state": "online", 00:17:19.005 "raid_level": "raid5f", 00:17:19.005 "superblock": false, 00:17:19.005 "num_base_bdevs": 4, 00:17:19.005 "num_base_bdevs_discovered": 4, 00:17:19.005 "num_base_bdevs_operational": 4, 00:17:19.005 "process": { 00:17:19.005 "type": "rebuild", 00:17:19.005 "target": "spare", 00:17:19.005 "progress": { 00:17:19.005 "blocks": 19200, 00:17:19.005 "percent": 9 00:17:19.005 } 00:17:19.005 }, 00:17:19.005 "base_bdevs_list": [ 00:17:19.005 { 00:17:19.005 "name": "spare", 00:17:19.005 "uuid": "613518c5-e8ce-593b-bc92-897c220c86d3", 00:17:19.005 "is_configured": true, 00:17:19.005 "data_offset": 0, 00:17:19.005 "data_size": 65536 00:17:19.005 }, 00:17:19.005 { 00:17:19.005 "name": "BaseBdev2", 00:17:19.005 "uuid": "36bb69ca-3ff4-5db9-8346-4a02d060d807", 00:17:19.005 "is_configured": true, 00:17:19.005 "data_offset": 0, 00:17:19.005 "data_size": 65536 00:17:19.005 }, 00:17:19.005 { 00:17:19.005 "name": "BaseBdev3", 00:17:19.005 "uuid": "def06a51-db16-5e83-a64f-90b3470a070c", 00:17:19.005 "is_configured": true, 00:17:19.005 "data_offset": 0, 00:17:19.005 "data_size": 65536 00:17:19.005 }, 00:17:19.005 { 00:17:19.005 "name": "BaseBdev4", 00:17:19.005 "uuid": "cda345c4-941e-5d8b-841d-cfc3bb34152f", 00:17:19.005 "is_configured": true, 00:17:19.005 "data_offset": 0, 00:17:19.005 "data_size": 65536 00:17:19.005 } 00:17:19.005 ] 00:17:19.005 }' 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=615 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.005 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.005 
"name": "raid_bdev1", 00:17:19.006 "uuid": "cee10dc4-bf1a-4e9d-8fc9-35e5bc60db6c", 00:17:19.006 "strip_size_kb": 64, 00:17:19.006 "state": "online", 00:17:19.006 "raid_level": "raid5f", 00:17:19.006 "superblock": false, 00:17:19.006 "num_base_bdevs": 4, 00:17:19.006 "num_base_bdevs_discovered": 4, 00:17:19.006 "num_base_bdevs_operational": 4, 00:17:19.006 "process": { 00:17:19.006 "type": "rebuild", 00:17:19.006 "target": "spare", 00:17:19.006 "progress": { 00:17:19.006 "blocks": 21120, 00:17:19.006 "percent": 10 00:17:19.006 } 00:17:19.006 }, 00:17:19.006 "base_bdevs_list": [ 00:17:19.006 { 00:17:19.006 "name": "spare", 00:17:19.006 "uuid": "613518c5-e8ce-593b-bc92-897c220c86d3", 00:17:19.006 "is_configured": true, 00:17:19.006 "data_offset": 0, 00:17:19.006 "data_size": 65536 00:17:19.006 }, 00:17:19.006 { 00:17:19.006 "name": "BaseBdev2", 00:17:19.006 "uuid": "36bb69ca-3ff4-5db9-8346-4a02d060d807", 00:17:19.006 "is_configured": true, 00:17:19.006 "data_offset": 0, 00:17:19.006 "data_size": 65536 00:17:19.006 }, 00:17:19.006 { 00:17:19.006 "name": "BaseBdev3", 00:17:19.006 "uuid": "def06a51-db16-5e83-a64f-90b3470a070c", 00:17:19.006 "is_configured": true, 00:17:19.006 "data_offset": 0, 00:17:19.006 "data_size": 65536 00:17:19.006 }, 00:17:19.006 { 00:17:19.006 "name": "BaseBdev4", 00:17:19.006 "uuid": "cda345c4-941e-5d8b-841d-cfc3bb34152f", 00:17:19.006 "is_configured": true, 00:17:19.006 "data_offset": 0, 00:17:19.006 "data_size": 65536 00:17:19.006 } 00:17:19.006 ] 00:17:19.006 }' 00:17:19.006 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.006 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.006 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.006 12:34:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.006 12:34:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:19.945 12:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.945 12:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.945 12:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.945 12:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.945 12:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.945 12:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.945 12:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.945 12:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.945 12:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.945 12:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.945 12:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.945 12:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.945 "name": "raid_bdev1", 00:17:19.945 "uuid": "cee10dc4-bf1a-4e9d-8fc9-35e5bc60db6c", 00:17:19.945 "strip_size_kb": 64, 00:17:19.945 "state": "online", 00:17:19.945 "raid_level": "raid5f", 00:17:19.945 "superblock": false, 00:17:19.945 "num_base_bdevs": 4, 00:17:19.945 "num_base_bdevs_discovered": 4, 00:17:19.945 "num_base_bdevs_operational": 4, 00:17:19.945 "process": { 00:17:19.945 "type": "rebuild", 00:17:19.945 "target": "spare", 00:17:19.945 "progress": { 00:17:19.945 "blocks": 42240, 00:17:19.945 "percent": 21 00:17:19.945 } 00:17:19.945 }, 00:17:19.945 "base_bdevs_list": [ 00:17:19.945 { 
00:17:19.945 "name": "spare", 00:17:19.945 "uuid": "613518c5-e8ce-593b-bc92-897c220c86d3", 00:17:19.945 "is_configured": true, 00:17:19.945 "data_offset": 0, 00:17:19.945 "data_size": 65536 00:17:19.945 }, 00:17:19.945 { 00:17:19.945 "name": "BaseBdev2", 00:17:19.945 "uuid": "36bb69ca-3ff4-5db9-8346-4a02d060d807", 00:17:19.945 "is_configured": true, 00:17:19.945 "data_offset": 0, 00:17:19.945 "data_size": 65536 00:17:19.945 }, 00:17:19.945 { 00:17:19.945 "name": "BaseBdev3", 00:17:19.945 "uuid": "def06a51-db16-5e83-a64f-90b3470a070c", 00:17:19.945 "is_configured": true, 00:17:19.945 "data_offset": 0, 00:17:19.945 "data_size": 65536 00:17:19.945 }, 00:17:19.945 { 00:17:19.945 "name": "BaseBdev4", 00:17:19.945 "uuid": "cda345c4-941e-5d8b-841d-cfc3bb34152f", 00:17:19.945 "is_configured": true, 00:17:19.945 "data_offset": 0, 00:17:19.945 "data_size": 65536 00:17:19.945 } 00:17:19.945 ] 00:17:19.945 }' 00:17:19.945 12:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.205 12:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.205 12:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.205 12:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.205 12:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:21.144 12:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.144 12:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.144 12:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.144 12:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.144 12:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:17:21.144 12:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.144 12:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.144 12:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.145 12:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.145 12:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.145 12:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.145 12:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.145 "name": "raid_bdev1", 00:17:21.145 "uuid": "cee10dc4-bf1a-4e9d-8fc9-35e5bc60db6c", 00:17:21.145 "strip_size_kb": 64, 00:17:21.145 "state": "online", 00:17:21.145 "raid_level": "raid5f", 00:17:21.145 "superblock": false, 00:17:21.145 "num_base_bdevs": 4, 00:17:21.145 "num_base_bdevs_discovered": 4, 00:17:21.145 "num_base_bdevs_operational": 4, 00:17:21.145 "process": { 00:17:21.145 "type": "rebuild", 00:17:21.145 "target": "spare", 00:17:21.145 "progress": { 00:17:21.145 "blocks": 65280, 00:17:21.145 "percent": 33 00:17:21.145 } 00:17:21.145 }, 00:17:21.145 "base_bdevs_list": [ 00:17:21.145 { 00:17:21.145 "name": "spare", 00:17:21.145 "uuid": "613518c5-e8ce-593b-bc92-897c220c86d3", 00:17:21.145 "is_configured": true, 00:17:21.145 "data_offset": 0, 00:17:21.145 "data_size": 65536 00:17:21.145 }, 00:17:21.145 { 00:17:21.145 "name": "BaseBdev2", 00:17:21.145 "uuid": "36bb69ca-3ff4-5db9-8346-4a02d060d807", 00:17:21.145 "is_configured": true, 00:17:21.145 "data_offset": 0, 00:17:21.145 "data_size": 65536 00:17:21.145 }, 00:17:21.145 { 00:17:21.145 "name": "BaseBdev3", 00:17:21.145 "uuid": "def06a51-db16-5e83-a64f-90b3470a070c", 00:17:21.145 "is_configured": true, 00:17:21.145 "data_offset": 0, 00:17:21.145 
"data_size": 65536 00:17:21.145 }, 00:17:21.145 { 00:17:21.145 "name": "BaseBdev4", 00:17:21.145 "uuid": "cda345c4-941e-5d8b-841d-cfc3bb34152f", 00:17:21.145 "is_configured": true, 00:17:21.145 "data_offset": 0, 00:17:21.145 "data_size": 65536 00:17:21.145 } 00:17:21.145 ] 00:17:21.145 }' 00:17:21.145 12:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.145 12:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.145 12:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.145 12:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.145 12:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:22.526 12:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:22.526 12:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.526 12:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.526 12:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.526 12:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.526 12:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.526 12:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.526 12:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.526 12:34:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.526 12:34:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.526 12:34:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.526 12:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.526 "name": "raid_bdev1", 00:17:22.526 "uuid": "cee10dc4-bf1a-4e9d-8fc9-35e5bc60db6c", 00:17:22.526 "strip_size_kb": 64, 00:17:22.526 "state": "online", 00:17:22.526 "raid_level": "raid5f", 00:17:22.526 "superblock": false, 00:17:22.526 "num_base_bdevs": 4, 00:17:22.526 "num_base_bdevs_discovered": 4, 00:17:22.526 "num_base_bdevs_operational": 4, 00:17:22.526 "process": { 00:17:22.526 "type": "rebuild", 00:17:22.526 "target": "spare", 00:17:22.526 "progress": { 00:17:22.526 "blocks": 86400, 00:17:22.526 "percent": 43 00:17:22.526 } 00:17:22.526 }, 00:17:22.526 "base_bdevs_list": [ 00:17:22.526 { 00:17:22.526 "name": "spare", 00:17:22.526 "uuid": "613518c5-e8ce-593b-bc92-897c220c86d3", 00:17:22.526 "is_configured": true, 00:17:22.526 "data_offset": 0, 00:17:22.526 "data_size": 65536 00:17:22.526 }, 00:17:22.526 { 00:17:22.526 "name": "BaseBdev2", 00:17:22.526 "uuid": "36bb69ca-3ff4-5db9-8346-4a02d060d807", 00:17:22.526 "is_configured": true, 00:17:22.526 "data_offset": 0, 00:17:22.526 "data_size": 65536 00:17:22.526 }, 00:17:22.526 { 00:17:22.526 "name": "BaseBdev3", 00:17:22.526 "uuid": "def06a51-db16-5e83-a64f-90b3470a070c", 00:17:22.526 "is_configured": true, 00:17:22.526 "data_offset": 0, 00:17:22.526 "data_size": 65536 00:17:22.526 }, 00:17:22.526 { 00:17:22.526 "name": "BaseBdev4", 00:17:22.526 "uuid": "cda345c4-941e-5d8b-841d-cfc3bb34152f", 00:17:22.526 "is_configured": true, 00:17:22.526 "data_offset": 0, 00:17:22.526 "data_size": 65536 00:17:22.526 } 00:17:22.526 ] 00:17:22.526 }' 00:17:22.526 12:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.526 12:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.526 12:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:22.526 12:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.526 12:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:23.465 12:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:23.465 12:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.465 12:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.465 12:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.465 12:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.465 12:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.465 12:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.465 12:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.465 12:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.465 12:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.465 12:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.465 12:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.465 "name": "raid_bdev1", 00:17:23.465 "uuid": "cee10dc4-bf1a-4e9d-8fc9-35e5bc60db6c", 00:17:23.465 "strip_size_kb": 64, 00:17:23.465 "state": "online", 00:17:23.465 "raid_level": "raid5f", 00:17:23.465 "superblock": false, 00:17:23.465 "num_base_bdevs": 4, 00:17:23.465 "num_base_bdevs_discovered": 4, 00:17:23.465 "num_base_bdevs_operational": 4, 00:17:23.465 "process": { 00:17:23.465 "type": "rebuild", 00:17:23.465 "target": "spare", 00:17:23.465 
"progress": { 00:17:23.465 "blocks": 109440, 00:17:23.465 "percent": 55 00:17:23.465 } 00:17:23.465 }, 00:17:23.465 "base_bdevs_list": [ 00:17:23.465 { 00:17:23.465 "name": "spare", 00:17:23.465 "uuid": "613518c5-e8ce-593b-bc92-897c220c86d3", 00:17:23.465 "is_configured": true, 00:17:23.465 "data_offset": 0, 00:17:23.465 "data_size": 65536 00:17:23.465 }, 00:17:23.465 { 00:17:23.465 "name": "BaseBdev2", 00:17:23.465 "uuid": "36bb69ca-3ff4-5db9-8346-4a02d060d807", 00:17:23.465 "is_configured": true, 00:17:23.465 "data_offset": 0, 00:17:23.465 "data_size": 65536 00:17:23.465 }, 00:17:23.465 { 00:17:23.465 "name": "BaseBdev3", 00:17:23.465 "uuid": "def06a51-db16-5e83-a64f-90b3470a070c", 00:17:23.465 "is_configured": true, 00:17:23.465 "data_offset": 0, 00:17:23.465 "data_size": 65536 00:17:23.465 }, 00:17:23.465 { 00:17:23.465 "name": "BaseBdev4", 00:17:23.465 "uuid": "cda345c4-941e-5d8b-841d-cfc3bb34152f", 00:17:23.465 "is_configured": true, 00:17:23.465 "data_offset": 0, 00:17:23.465 "data_size": 65536 00:17:23.465 } 00:17:23.465 ] 00:17:23.465 }' 00:17:23.465 12:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.465 12:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:23.465 12:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.465 12:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.465 12:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:24.847 12:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:24.847 12:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.847 12:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.847 12:34:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.847 12:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.847 12:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.847 12:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.847 12:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.847 12:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.847 12:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.847 12:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.847 12:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.847 "name": "raid_bdev1", 00:17:24.847 "uuid": "cee10dc4-bf1a-4e9d-8fc9-35e5bc60db6c", 00:17:24.847 "strip_size_kb": 64, 00:17:24.847 "state": "online", 00:17:24.847 "raid_level": "raid5f", 00:17:24.847 "superblock": false, 00:17:24.847 "num_base_bdevs": 4, 00:17:24.847 "num_base_bdevs_discovered": 4, 00:17:24.847 "num_base_bdevs_operational": 4, 00:17:24.847 "process": { 00:17:24.847 "type": "rebuild", 00:17:24.847 "target": "spare", 00:17:24.847 "progress": { 00:17:24.847 "blocks": 130560, 00:17:24.847 "percent": 66 00:17:24.847 } 00:17:24.847 }, 00:17:24.847 "base_bdevs_list": [ 00:17:24.847 { 00:17:24.847 "name": "spare", 00:17:24.847 "uuid": "613518c5-e8ce-593b-bc92-897c220c86d3", 00:17:24.847 "is_configured": true, 00:17:24.847 "data_offset": 0, 00:17:24.847 "data_size": 65536 00:17:24.847 }, 00:17:24.847 { 00:17:24.847 "name": "BaseBdev2", 00:17:24.847 "uuid": "36bb69ca-3ff4-5db9-8346-4a02d060d807", 00:17:24.847 "is_configured": true, 00:17:24.847 "data_offset": 0, 00:17:24.847 "data_size": 65536 00:17:24.847 }, 00:17:24.847 { 
00:17:24.847 "name": "BaseBdev3", 00:17:24.847 "uuid": "def06a51-db16-5e83-a64f-90b3470a070c", 00:17:24.847 "is_configured": true, 00:17:24.847 "data_offset": 0, 00:17:24.847 "data_size": 65536 00:17:24.847 }, 00:17:24.847 { 00:17:24.847 "name": "BaseBdev4", 00:17:24.847 "uuid": "cda345c4-941e-5d8b-841d-cfc3bb34152f", 00:17:24.847 "is_configured": true, 00:17:24.847 "data_offset": 0, 00:17:24.847 "data_size": 65536 00:17:24.847 } 00:17:24.847 ] 00:17:24.847 }' 00:17:24.847 12:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.847 12:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.847 12:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.847 12:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.847 12:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:25.787 12:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:25.787 12:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.787 12:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.787 12:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.787 12:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.787 12:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.787 12:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.787 12:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.787 12:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:17:25.787 12:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.787 12:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.787 12:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.787 "name": "raid_bdev1", 00:17:25.787 "uuid": "cee10dc4-bf1a-4e9d-8fc9-35e5bc60db6c", 00:17:25.787 "strip_size_kb": 64, 00:17:25.787 "state": "online", 00:17:25.787 "raid_level": "raid5f", 00:17:25.787 "superblock": false, 00:17:25.787 "num_base_bdevs": 4, 00:17:25.787 "num_base_bdevs_discovered": 4, 00:17:25.787 "num_base_bdevs_operational": 4, 00:17:25.787 "process": { 00:17:25.787 "type": "rebuild", 00:17:25.787 "target": "spare", 00:17:25.787 "progress": { 00:17:25.787 "blocks": 151680, 00:17:25.787 "percent": 77 00:17:25.787 } 00:17:25.787 }, 00:17:25.787 "base_bdevs_list": [ 00:17:25.787 { 00:17:25.787 "name": "spare", 00:17:25.787 "uuid": "613518c5-e8ce-593b-bc92-897c220c86d3", 00:17:25.787 "is_configured": true, 00:17:25.787 "data_offset": 0, 00:17:25.787 "data_size": 65536 00:17:25.787 }, 00:17:25.787 { 00:17:25.787 "name": "BaseBdev2", 00:17:25.787 "uuid": "36bb69ca-3ff4-5db9-8346-4a02d060d807", 00:17:25.787 "is_configured": true, 00:17:25.787 "data_offset": 0, 00:17:25.787 "data_size": 65536 00:17:25.787 }, 00:17:25.787 { 00:17:25.787 "name": "BaseBdev3", 00:17:25.787 "uuid": "def06a51-db16-5e83-a64f-90b3470a070c", 00:17:25.787 "is_configured": true, 00:17:25.787 "data_offset": 0, 00:17:25.787 "data_size": 65536 00:17:25.787 }, 00:17:25.787 { 00:17:25.787 "name": "BaseBdev4", 00:17:25.787 "uuid": "cda345c4-941e-5d8b-841d-cfc3bb34152f", 00:17:25.787 "is_configured": true, 00:17:25.787 "data_offset": 0, 00:17:25.787 "data_size": 65536 00:17:25.787 } 00:17:25.787 ] 00:17:25.787 }' 00:17:25.787 12:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.787 12:34:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:25.787 12:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.787 12:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:25.787 12:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:27.167 12:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:27.167 12:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.167 12:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.167 12:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.167 12:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.167 12:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.167 12:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.167 12:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.167 12:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.167 12:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.167 12:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.167 12:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.167 "name": "raid_bdev1", 00:17:27.167 "uuid": "cee10dc4-bf1a-4e9d-8fc9-35e5bc60db6c", 00:17:27.167 "strip_size_kb": 64, 00:17:27.167 "state": "online", 00:17:27.167 "raid_level": "raid5f", 00:17:27.167 "superblock": false, 00:17:27.167 "num_base_bdevs": 4, 00:17:27.167 
"num_base_bdevs_discovered": 4, 00:17:27.167 "num_base_bdevs_operational": 4, 00:17:27.167 "process": { 00:17:27.167 "type": "rebuild", 00:17:27.167 "target": "spare", 00:17:27.167 "progress": { 00:17:27.167 "blocks": 174720, 00:17:27.167 "percent": 88 00:17:27.167 } 00:17:27.167 }, 00:17:27.167 "base_bdevs_list": [ 00:17:27.167 { 00:17:27.167 "name": "spare", 00:17:27.167 "uuid": "613518c5-e8ce-593b-bc92-897c220c86d3", 00:17:27.167 "is_configured": true, 00:17:27.167 "data_offset": 0, 00:17:27.167 "data_size": 65536 00:17:27.167 }, 00:17:27.167 { 00:17:27.167 "name": "BaseBdev2", 00:17:27.167 "uuid": "36bb69ca-3ff4-5db9-8346-4a02d060d807", 00:17:27.167 "is_configured": true, 00:17:27.167 "data_offset": 0, 00:17:27.167 "data_size": 65536 00:17:27.167 }, 00:17:27.167 { 00:17:27.167 "name": "BaseBdev3", 00:17:27.167 "uuid": "def06a51-db16-5e83-a64f-90b3470a070c", 00:17:27.167 "is_configured": true, 00:17:27.167 "data_offset": 0, 00:17:27.167 "data_size": 65536 00:17:27.167 }, 00:17:27.167 { 00:17:27.167 "name": "BaseBdev4", 00:17:27.167 "uuid": "cda345c4-941e-5d8b-841d-cfc3bb34152f", 00:17:27.167 "is_configured": true, 00:17:27.167 "data_offset": 0, 00:17:27.167 "data_size": 65536 00:17:27.167 } 00:17:27.167 ] 00:17:27.167 }' 00:17:27.167 12:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.167 12:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.167 12:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.168 12:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.168 12:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:28.107 12:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.107 12:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:28.107 12:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.107 12:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.107 12:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.107 12:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.107 12:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.107 12:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.107 12:34:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.107 12:34:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.107 12:34:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.107 [2024-09-30 12:34:39.820445] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:28.107 [2024-09-30 12:34:39.820519] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:28.107 [2024-09-30 12:34:39.820571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.107 12:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.107 "name": "raid_bdev1", 00:17:28.107 "uuid": "cee10dc4-bf1a-4e9d-8fc9-35e5bc60db6c", 00:17:28.107 "strip_size_kb": 64, 00:17:28.107 "state": "online", 00:17:28.107 "raid_level": "raid5f", 00:17:28.107 "superblock": false, 00:17:28.107 "num_base_bdevs": 4, 00:17:28.107 "num_base_bdevs_discovered": 4, 00:17:28.107 "num_base_bdevs_operational": 4, 00:17:28.107 "process": { 00:17:28.107 "type": "rebuild", 00:17:28.107 "target": "spare", 00:17:28.107 "progress": { 00:17:28.107 "blocks": 195840, 00:17:28.107 
"percent": 99 00:17:28.107 } 00:17:28.107 }, 00:17:28.107 "base_bdevs_list": [ 00:17:28.107 { 00:17:28.107 "name": "spare", 00:17:28.108 "uuid": "613518c5-e8ce-593b-bc92-897c220c86d3", 00:17:28.108 "is_configured": true, 00:17:28.108 "data_offset": 0, 00:17:28.108 "data_size": 65536 00:17:28.108 }, 00:17:28.108 { 00:17:28.108 "name": "BaseBdev2", 00:17:28.108 "uuid": "36bb69ca-3ff4-5db9-8346-4a02d060d807", 00:17:28.108 "is_configured": true, 00:17:28.108 "data_offset": 0, 00:17:28.108 "data_size": 65536 00:17:28.108 }, 00:17:28.108 { 00:17:28.108 "name": "BaseBdev3", 00:17:28.108 "uuid": "def06a51-db16-5e83-a64f-90b3470a070c", 00:17:28.108 "is_configured": true, 00:17:28.108 "data_offset": 0, 00:17:28.108 "data_size": 65536 00:17:28.108 }, 00:17:28.108 { 00:17:28.108 "name": "BaseBdev4", 00:17:28.108 "uuid": "cda345c4-941e-5d8b-841d-cfc3bb34152f", 00:17:28.108 "is_configured": true, 00:17:28.108 "data_offset": 0, 00:17:28.108 "data_size": 65536 00:17:28.108 } 00:17:28.108 ] 00:17:28.108 }' 00:17:28.108 12:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.108 12:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.108 12:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.108 12:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.108 12:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:29.047 12:34:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:29.047 12:34:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.047 12:34:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.047 12:34:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:29.047 12:34:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.047 12:34:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.047 12:34:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.047 12:34:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.047 12:34:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.047 12:34:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.307 12:34:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.308 12:34:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.308 "name": "raid_bdev1", 00:17:29.308 "uuid": "cee10dc4-bf1a-4e9d-8fc9-35e5bc60db6c", 00:17:29.308 "strip_size_kb": 64, 00:17:29.308 "state": "online", 00:17:29.308 "raid_level": "raid5f", 00:17:29.308 "superblock": false, 00:17:29.308 "num_base_bdevs": 4, 00:17:29.308 "num_base_bdevs_discovered": 4, 00:17:29.308 "num_base_bdevs_operational": 4, 00:17:29.308 "base_bdevs_list": [ 00:17:29.308 { 00:17:29.308 "name": "spare", 00:17:29.308 "uuid": "613518c5-e8ce-593b-bc92-897c220c86d3", 00:17:29.308 "is_configured": true, 00:17:29.308 "data_offset": 0, 00:17:29.308 "data_size": 65536 00:17:29.308 }, 00:17:29.308 { 00:17:29.308 "name": "BaseBdev2", 00:17:29.308 "uuid": "36bb69ca-3ff4-5db9-8346-4a02d060d807", 00:17:29.308 "is_configured": true, 00:17:29.308 "data_offset": 0, 00:17:29.308 "data_size": 65536 00:17:29.308 }, 00:17:29.308 { 00:17:29.308 "name": "BaseBdev3", 00:17:29.308 "uuid": "def06a51-db16-5e83-a64f-90b3470a070c", 00:17:29.308 "is_configured": true, 00:17:29.308 "data_offset": 0, 00:17:29.308 "data_size": 65536 00:17:29.308 }, 00:17:29.308 { 00:17:29.308 "name": "BaseBdev4", 00:17:29.308 
"uuid": "cda345c4-941e-5d8b-841d-cfc3bb34152f", 00:17:29.308 "is_configured": true, 00:17:29.308 "data_offset": 0, 00:17:29.308 "data_size": 65536 00:17:29.308 } 00:17:29.308 ] 00:17:29.308 }' 00:17:29.308 12:34:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.308 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:29.308 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.308 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:29.308 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:29.308 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:29.308 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.308 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:29.308 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:29.308 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.308 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.308 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.308 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.308 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.308 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.308 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.308 "name": "raid_bdev1", 00:17:29.308 "uuid": 
"cee10dc4-bf1a-4e9d-8fc9-35e5bc60db6c", 00:17:29.308 "strip_size_kb": 64, 00:17:29.308 "state": "online", 00:17:29.308 "raid_level": "raid5f", 00:17:29.308 "superblock": false, 00:17:29.308 "num_base_bdevs": 4, 00:17:29.308 "num_base_bdevs_discovered": 4, 00:17:29.308 "num_base_bdevs_operational": 4, 00:17:29.308 "base_bdevs_list": [ 00:17:29.308 { 00:17:29.308 "name": "spare", 00:17:29.308 "uuid": "613518c5-e8ce-593b-bc92-897c220c86d3", 00:17:29.308 "is_configured": true, 00:17:29.308 "data_offset": 0, 00:17:29.308 "data_size": 65536 00:17:29.308 }, 00:17:29.308 { 00:17:29.308 "name": "BaseBdev2", 00:17:29.308 "uuid": "36bb69ca-3ff4-5db9-8346-4a02d060d807", 00:17:29.308 "is_configured": true, 00:17:29.308 "data_offset": 0, 00:17:29.308 "data_size": 65536 00:17:29.308 }, 00:17:29.308 { 00:17:29.308 "name": "BaseBdev3", 00:17:29.308 "uuid": "def06a51-db16-5e83-a64f-90b3470a070c", 00:17:29.308 "is_configured": true, 00:17:29.308 "data_offset": 0, 00:17:29.308 "data_size": 65536 00:17:29.308 }, 00:17:29.308 { 00:17:29.308 "name": "BaseBdev4", 00:17:29.308 "uuid": "cda345c4-941e-5d8b-841d-cfc3bb34152f", 00:17:29.308 "is_configured": true, 00:17:29.308 "data_offset": 0, 00:17:29.308 "data_size": 65536 00:17:29.308 } 00:17:29.308 ] 00:17:29.308 }' 00:17:29.308 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.308 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:29.308 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.567 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:29.567 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:29.567 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.567 12:34:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.567 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.567 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.567 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:29.567 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.567 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.568 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.568 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.568 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.568 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.568 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.568 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.568 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.568 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.568 "name": "raid_bdev1", 00:17:29.568 "uuid": "cee10dc4-bf1a-4e9d-8fc9-35e5bc60db6c", 00:17:29.568 "strip_size_kb": 64, 00:17:29.568 "state": "online", 00:17:29.568 "raid_level": "raid5f", 00:17:29.568 "superblock": false, 00:17:29.568 "num_base_bdevs": 4, 00:17:29.568 "num_base_bdevs_discovered": 4, 00:17:29.568 "num_base_bdevs_operational": 4, 00:17:29.568 "base_bdevs_list": [ 00:17:29.568 { 00:17:29.568 "name": "spare", 00:17:29.568 "uuid": "613518c5-e8ce-593b-bc92-897c220c86d3", 00:17:29.568 "is_configured": 
true, 00:17:29.568 "data_offset": 0, 00:17:29.568 "data_size": 65536 00:17:29.568 }, 00:17:29.568 { 00:17:29.568 "name": "BaseBdev2", 00:17:29.568 "uuid": "36bb69ca-3ff4-5db9-8346-4a02d060d807", 00:17:29.568 "is_configured": true, 00:17:29.568 "data_offset": 0, 00:17:29.568 "data_size": 65536 00:17:29.568 }, 00:17:29.568 { 00:17:29.568 "name": "BaseBdev3", 00:17:29.568 "uuid": "def06a51-db16-5e83-a64f-90b3470a070c", 00:17:29.568 "is_configured": true, 00:17:29.568 "data_offset": 0, 00:17:29.568 "data_size": 65536 00:17:29.568 }, 00:17:29.568 { 00:17:29.568 "name": "BaseBdev4", 00:17:29.568 "uuid": "cda345c4-941e-5d8b-841d-cfc3bb34152f", 00:17:29.568 "is_configured": true, 00:17:29.568 "data_offset": 0, 00:17:29.568 "data_size": 65536 00:17:29.568 } 00:17:29.568 ] 00:17:29.568 }' 00:17:29.568 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.568 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.827 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:29.827 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.827 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.827 [2024-09-30 12:34:41.648977] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:29.827 [2024-09-30 12:34:41.649015] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.827 [2024-09-30 12:34:41.649123] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.828 [2024-09-30 12:34:41.649231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:29.828 [2024-09-30 12:34:41.649241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:29.828 12:34:41 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.828 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:29.828 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.828 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.828 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.828 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.828 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:29.828 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:29.828 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:29.828 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:29.828 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:29.828 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:29.828 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:29.828 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:29.828 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:29.828 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:29.828 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:29.828 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:29.828 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:30.087 /dev/nbd0 00:17:30.087 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:30.087 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:30.087 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:30.087 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:30.087 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:30.087 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:30.087 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:30.087 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:30.087 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:30.087 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:30.088 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:30.088 1+0 records in 00:17:30.088 1+0 records out 00:17:30.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366958 s, 11.2 MB/s 00:17:30.088 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.088 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:30.088 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.088 12:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:30.088 12:34:41 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # return 0 00:17:30.088 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:30.088 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:30.088 12:34:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:30.347 /dev/nbd1 00:17:30.347 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:30.347 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:30.347 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:30.347 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:30.347 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:30.347 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:30.347 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:30.347 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:30.347 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:30.347 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:30.347 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:30.347 1+0 records in 00:17:30.347 1+0 records out 00:17:30.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044931 s, 9.1 MB/s 00:17:30.347 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.347 12:34:42 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@886 -- # size=4096 00:17:30.347 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.347 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:30.347 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:30.347 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:30.347 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:30.347 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:30.607 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:30.607 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:30.607 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:30.607 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:30.607 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:30.607 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.607 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:30.867 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:30.867 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:30.867 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:30.867 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:30.867 12:34:42 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:30.867 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:30.867 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:30.867 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:30.867 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.867 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:31.126 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:31.126 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:31.126 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:31.126 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.126 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.126 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:31.126 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:31.126 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.127 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:31.127 12:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84443 00:17:31.127 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 84443 ']' 00:17:31.127 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 84443 00:17:31.127 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:17:31.127 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:17:31.127 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84443 00:17:31.127 killing process with pid 84443 00:17:31.127 Received shutdown signal, test time was about 60.000000 seconds 00:17:31.127 00:17:31.127 Latency(us) 00:17:31.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.127 =================================================================================================================== 00:17:31.127 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:31.127 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:31.127 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:31.127 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84443' 00:17:31.127 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 84443 00:17:31.127 [2024-09-30 12:34:42.877245] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:31.127 12:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 84443 00:17:31.696 [2024-09-30 12:34:43.328780] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:32.634 12:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:32.634 00:17:32.634 real 0m19.991s 00:17:32.634 user 0m23.685s 00:17:32.634 sys 0m2.407s 00:17:32.634 12:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:32.634 12:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.634 ************************************ 00:17:32.634 END TEST raid5f_rebuild_test 00:17:32.634 ************************************ 00:17:32.894 12:34:44 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb 
raid_rebuild_test raid5f 4 true false true 00:17:32.894 12:34:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:32.894 12:34:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:32.894 12:34:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:32.894 ************************************ 00:17:32.894 START TEST raid5f_rebuild_test_sb 00:17:32.894 ************************************ 00:17:32.894 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:17:32.894 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:32.894 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:32.894 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:32.895 12:34:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84960 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84960 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84960 ']' 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.895 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.895 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:32.895 Zero copy mechanism will not be used. 00:17:32.895 [2024-09-30 12:34:44.681897] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:32.895 [2024-09-30 12:34:44.682015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84960 ] 00:17:33.154 [2024-09-30 12:34:44.850330] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.154 [2024-09-30 12:34:45.044990] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.413 [2024-09-30 12:34:45.232036] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.413 [2024-09-30 12:34:45.232100] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.673 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.673 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:33.673 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:33.673 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:33.673 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.673 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.673 BaseBdev1_malloc 00:17:33.673 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.673 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:33.673 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.673 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.673 [2024-09-30 12:34:45.528856] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:33.673 [2024-09-30 12:34:45.528935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.673 [2024-09-30 12:34:45.528958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:33.673 [2024-09-30 12:34:45.528972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.673 [2024-09-30 12:34:45.530953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.673 [2024-09-30 12:34:45.530990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:33.673 BaseBdev1 00:17:33.673 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.673 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:33.673 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:33.673 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.674 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.934 BaseBdev2_malloc 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.934 [2024-09-30 12:34:45.613591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:33.934 [2024-09-30 12:34:45.613646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:33.934 [2024-09-30 12:34:45.613683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:33.934 [2024-09-30 12:34:45.613693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.934 [2024-09-30 12:34:45.615717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.934 [2024-09-30 12:34:45.615769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:33.934 BaseBdev2 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.934 BaseBdev3_malloc 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.934 [2024-09-30 12:34:45.665944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:33.934 [2024-09-30 12:34:45.665993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.934 [2024-09-30 12:34:45.666028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:33.934 [2024-09-30 
12:34:45.666038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.934 [2024-09-30 12:34:45.667951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.934 [2024-09-30 12:34:45.667992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:33.934 BaseBdev3 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.934 BaseBdev4_malloc 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.934 [2024-09-30 12:34:45.718457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:33.934 [2024-09-30 12:34:45.718511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.934 [2024-09-30 12:34:45.718545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:33.934 [2024-09-30 12:34:45.718556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.934 [2024-09-30 12:34:45.720638] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:33.934 [2024-09-30 12:34:45.720682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:33.934 BaseBdev4 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.934 spare_malloc 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.934 spare_delay 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.934 [2024-09-30 12:34:45.782727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:33.934 [2024-09-30 12:34:45.782795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.934 [2024-09-30 12:34:45.782829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:17:33.934 [2024-09-30 12:34:45.782838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.934 [2024-09-30 12:34:45.784787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.934 [2024-09-30 12:34:45.784822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:33.934 spare 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.934 [2024-09-30 12:34:45.794794] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:33.934 [2024-09-30 12:34:45.796467] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:33.934 [2024-09-30 12:34:45.796534] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:33.934 [2024-09-30 12:34:45.796581] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:33.934 [2024-09-30 12:34:45.796775] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:33.934 [2024-09-30 12:34:45.796796] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:33.934 [2024-09-30 12:34:45.797040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:33.934 [2024-09-30 12:34:45.803609] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:33.934 [2024-09-30 12:34:45.803646] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:17:33.934 [2024-09-30 12:34:45.803829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.934 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.194 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.194 12:34:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.194 "name": "raid_bdev1", 00:17:34.194 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:34.194 "strip_size_kb": 64, 00:17:34.194 "state": "online", 00:17:34.194 "raid_level": "raid5f", 00:17:34.194 "superblock": true, 00:17:34.194 "num_base_bdevs": 4, 00:17:34.194 "num_base_bdevs_discovered": 4, 00:17:34.194 "num_base_bdevs_operational": 4, 00:17:34.194 "base_bdevs_list": [ 00:17:34.194 { 00:17:34.194 "name": "BaseBdev1", 00:17:34.194 "uuid": "5d8c177a-9473-5beb-9a36-69dfd8ed4fc2", 00:17:34.194 "is_configured": true, 00:17:34.194 "data_offset": 2048, 00:17:34.194 "data_size": 63488 00:17:34.194 }, 00:17:34.194 { 00:17:34.194 "name": "BaseBdev2", 00:17:34.194 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:34.194 "is_configured": true, 00:17:34.194 "data_offset": 2048, 00:17:34.194 "data_size": 63488 00:17:34.194 }, 00:17:34.194 { 00:17:34.194 "name": "BaseBdev3", 00:17:34.194 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:34.194 "is_configured": true, 00:17:34.194 "data_offset": 2048, 00:17:34.194 "data_size": 63488 00:17:34.194 }, 00:17:34.194 { 00:17:34.194 "name": "BaseBdev4", 00:17:34.194 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:34.194 "is_configured": true, 00:17:34.194 "data_offset": 2048, 00:17:34.194 "data_size": 63488 00:17:34.194 } 00:17:34.194 ] 00:17:34.194 }' 00:17:34.194 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.194 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.454 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:34.454 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:34.454 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.454 12:34:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.454 [2024-09-30 12:34:46.290593] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.454 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.454 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:34.454 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.454 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.454 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.454 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:34.454 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.714 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:34.714 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:34.714 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:34.714 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:34.714 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:34.714 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:34.714 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:34.714 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:34.715 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:34.715 12:34:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:34.715 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:34.715 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:34.715 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:34.715 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:34.715 [2024-09-30 12:34:46.557988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:34.715 /dev/nbd0 00:17:34.974 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:34.974 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:34.974 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:34.974 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:34.975 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:34.975 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:34.975 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:34.975 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:34.975 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:34.975 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:34.975 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:34.975 1+0 records in 00:17:34.975 
1+0 records out 00:17:34.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426179 s, 9.6 MB/s 00:17:34.975 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.975 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:34.975 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.975 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:34.975 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:34.975 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:34.975 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:34.975 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:34.975 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:34.975 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:34.975 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:35.544 496+0 records in 00:17:35.544 496+0 records out 00:17:35.544 97517568 bytes (98 MB, 93 MiB) copied, 0.51141 s, 191 MB/s 00:17:35.544 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:35.544 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:35.545 12:34:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:35.545 [2024-09-30 12:34:47.372436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.545 [2024-09-30 12:34:47.393226] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:35.545 12:34:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.545 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.805 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.805 "name": "raid_bdev1", 00:17:35.805 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:35.805 "strip_size_kb": 64, 00:17:35.805 "state": "online", 00:17:35.805 "raid_level": "raid5f", 00:17:35.805 "superblock": true, 00:17:35.805 "num_base_bdevs": 4, 00:17:35.805 "num_base_bdevs_discovered": 3, 00:17:35.805 "num_base_bdevs_operational": 3, 00:17:35.805 
"base_bdevs_list": [ 00:17:35.805 { 00:17:35.805 "name": null, 00:17:35.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.805 "is_configured": false, 00:17:35.805 "data_offset": 0, 00:17:35.805 "data_size": 63488 00:17:35.805 }, 00:17:35.805 { 00:17:35.805 "name": "BaseBdev2", 00:17:35.805 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:35.805 "is_configured": true, 00:17:35.805 "data_offset": 2048, 00:17:35.805 "data_size": 63488 00:17:35.805 }, 00:17:35.805 { 00:17:35.805 "name": "BaseBdev3", 00:17:35.805 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:35.805 "is_configured": true, 00:17:35.805 "data_offset": 2048, 00:17:35.805 "data_size": 63488 00:17:35.805 }, 00:17:35.805 { 00:17:35.805 "name": "BaseBdev4", 00:17:35.805 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:35.805 "is_configured": true, 00:17:35.805 "data_offset": 2048, 00:17:35.805 "data_size": 63488 00:17:35.805 } 00:17:35.805 ] 00:17:35.805 }' 00:17:35.805 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.805 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.065 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:36.065 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.065 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.065 [2024-09-30 12:34:47.860398] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:36.065 [2024-09-30 12:34:47.873668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:36.065 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.065 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:36.065 [2024-09-30 12:34:47.882584] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:37.005 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.005 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.005 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.005 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.005 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.005 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.005 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.005 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.005 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.266 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.267 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.267 "name": "raid_bdev1", 00:17:37.267 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:37.267 "strip_size_kb": 64, 00:17:37.267 "state": "online", 00:17:37.267 "raid_level": "raid5f", 00:17:37.267 "superblock": true, 00:17:37.267 "num_base_bdevs": 4, 00:17:37.267 "num_base_bdevs_discovered": 4, 00:17:37.267 "num_base_bdevs_operational": 4, 00:17:37.267 "process": { 00:17:37.267 "type": "rebuild", 00:17:37.267 "target": "spare", 00:17:37.267 "progress": { 00:17:37.267 "blocks": 19200, 00:17:37.267 "percent": 10 00:17:37.267 } 00:17:37.267 }, 00:17:37.267 "base_bdevs_list": [ 00:17:37.267 { 00:17:37.267 "name": "spare", 00:17:37.267 "uuid": 
"ff0ed7f6-5577-5544-9ce3-6e89b68ef151", 00:17:37.267 "is_configured": true, 00:17:37.267 "data_offset": 2048, 00:17:37.267 "data_size": 63488 00:17:37.267 }, 00:17:37.267 { 00:17:37.267 "name": "BaseBdev2", 00:17:37.267 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:37.267 "is_configured": true, 00:17:37.267 "data_offset": 2048, 00:17:37.267 "data_size": 63488 00:17:37.267 }, 00:17:37.267 { 00:17:37.267 "name": "BaseBdev3", 00:17:37.267 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:37.267 "is_configured": true, 00:17:37.267 "data_offset": 2048, 00:17:37.267 "data_size": 63488 00:17:37.267 }, 00:17:37.267 { 00:17:37.267 "name": "BaseBdev4", 00:17:37.267 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:37.267 "is_configured": true, 00:17:37.267 "data_offset": 2048, 00:17:37.267 "data_size": 63488 00:17:37.267 } 00:17:37.267 ] 00:17:37.267 }' 00:17:37.267 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.267 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.267 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.267 [2024-09-30 12:34:49.041263] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.267 [2024-09-30 12:34:49.088160] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:37.267 [2024-09-30 12:34:49.088283] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.267 [2024-09-30 12:34:49.088321] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.267 [2024-09-30 12:34:49.088349] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:37.267 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.527 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.527 "name": "raid_bdev1", 00:17:37.527 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:37.527 "strip_size_kb": 64, 00:17:37.527 "state": "online", 00:17:37.527 "raid_level": "raid5f", 00:17:37.527 "superblock": true, 00:17:37.527 "num_base_bdevs": 4, 00:17:37.527 "num_base_bdevs_discovered": 3, 00:17:37.527 "num_base_bdevs_operational": 3, 00:17:37.527 "base_bdevs_list": [ 00:17:37.527 { 00:17:37.527 "name": null, 00:17:37.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.527 "is_configured": false, 00:17:37.527 "data_offset": 0, 00:17:37.527 "data_size": 63488 00:17:37.527 }, 00:17:37.527 { 00:17:37.527 "name": "BaseBdev2", 00:17:37.527 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:37.527 "is_configured": true, 00:17:37.527 "data_offset": 2048, 00:17:37.527 "data_size": 63488 00:17:37.527 }, 00:17:37.527 { 00:17:37.527 "name": "BaseBdev3", 00:17:37.527 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:37.527 "is_configured": true, 00:17:37.527 "data_offset": 2048, 00:17:37.527 "data_size": 63488 00:17:37.527 }, 00:17:37.527 { 00:17:37.527 "name": "BaseBdev4", 00:17:37.527 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:37.527 "is_configured": true, 00:17:37.527 "data_offset": 2048, 00:17:37.527 "data_size": 63488 00:17:37.527 } 00:17:37.527 ] 00:17:37.527 }' 00:17:37.527 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.527 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.787 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:37.787 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.787 
12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:37.787 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:37.787 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.787 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.787 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.787 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.787 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.787 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.787 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.787 "name": "raid_bdev1", 00:17:37.787 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:37.787 "strip_size_kb": 64, 00:17:37.787 "state": "online", 00:17:37.787 "raid_level": "raid5f", 00:17:37.787 "superblock": true, 00:17:37.787 "num_base_bdevs": 4, 00:17:37.787 "num_base_bdevs_discovered": 3, 00:17:37.787 "num_base_bdevs_operational": 3, 00:17:37.787 "base_bdevs_list": [ 00:17:37.787 { 00:17:37.787 "name": null, 00:17:37.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.787 "is_configured": false, 00:17:37.787 "data_offset": 0, 00:17:37.787 "data_size": 63488 00:17:37.787 }, 00:17:37.787 { 00:17:37.787 "name": "BaseBdev2", 00:17:37.787 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:37.787 "is_configured": true, 00:17:37.787 "data_offset": 2048, 00:17:37.787 "data_size": 63488 00:17:37.787 }, 00:17:37.787 { 00:17:37.787 "name": "BaseBdev3", 00:17:37.787 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:37.787 "is_configured": true, 00:17:37.787 "data_offset": 2048, 00:17:37.787 
"data_size": 63488 00:17:37.787 }, 00:17:37.787 { 00:17:37.787 "name": "BaseBdev4", 00:17:37.787 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:37.787 "is_configured": true, 00:17:37.787 "data_offset": 2048, 00:17:37.787 "data_size": 63488 00:17:37.787 } 00:17:37.787 ] 00:17:37.787 }' 00:17:37.787 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.787 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:37.787 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.050 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.050 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:38.050 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.050 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.050 [2024-09-30 12:34:49.706791] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:38.050 [2024-09-30 12:34:49.719860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:38.050 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.050 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:38.050 [2024-09-30 12:34:49.728849] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.023 "name": "raid_bdev1", 00:17:39.023 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:39.023 "strip_size_kb": 64, 00:17:39.023 "state": "online", 00:17:39.023 "raid_level": "raid5f", 00:17:39.023 "superblock": true, 00:17:39.023 "num_base_bdevs": 4, 00:17:39.023 "num_base_bdevs_discovered": 4, 00:17:39.023 "num_base_bdevs_operational": 4, 00:17:39.023 "process": { 00:17:39.023 "type": "rebuild", 00:17:39.023 "target": "spare", 00:17:39.023 "progress": { 00:17:39.023 "blocks": 19200, 00:17:39.023 "percent": 10 00:17:39.023 } 00:17:39.023 }, 00:17:39.023 "base_bdevs_list": [ 00:17:39.023 { 00:17:39.023 "name": "spare", 00:17:39.023 "uuid": "ff0ed7f6-5577-5544-9ce3-6e89b68ef151", 00:17:39.023 "is_configured": true, 00:17:39.023 "data_offset": 2048, 00:17:39.023 "data_size": 63488 00:17:39.023 }, 00:17:39.023 { 00:17:39.023 "name": "BaseBdev2", 00:17:39.023 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:39.023 "is_configured": true, 00:17:39.023 "data_offset": 2048, 00:17:39.023 "data_size": 63488 00:17:39.023 }, 00:17:39.023 { 
00:17:39.023 "name": "BaseBdev3", 00:17:39.023 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:39.023 "is_configured": true, 00:17:39.023 "data_offset": 2048, 00:17:39.023 "data_size": 63488 00:17:39.023 }, 00:17:39.023 { 00:17:39.023 "name": "BaseBdev4", 00:17:39.023 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:39.023 "is_configured": true, 00:17:39.023 "data_offset": 2048, 00:17:39.023 "data_size": 63488 00:17:39.023 } 00:17:39.023 ] 00:17:39.023 }' 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:39.023 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=635 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.023 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.283 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.283 "name": "raid_bdev1", 00:17:39.283 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:39.283 "strip_size_kb": 64, 00:17:39.284 "state": "online", 00:17:39.284 "raid_level": "raid5f", 00:17:39.284 "superblock": true, 00:17:39.284 "num_base_bdevs": 4, 00:17:39.284 "num_base_bdevs_discovered": 4, 00:17:39.284 "num_base_bdevs_operational": 4, 00:17:39.284 "process": { 00:17:39.284 "type": "rebuild", 00:17:39.284 "target": "spare", 00:17:39.284 "progress": { 00:17:39.284 "blocks": 21120, 00:17:39.284 "percent": 11 00:17:39.284 } 00:17:39.284 }, 00:17:39.284 "base_bdevs_list": [ 00:17:39.284 { 00:17:39.284 "name": "spare", 00:17:39.284 "uuid": "ff0ed7f6-5577-5544-9ce3-6e89b68ef151", 00:17:39.284 "is_configured": true, 00:17:39.284 "data_offset": 2048, 00:17:39.284 "data_size": 63488 00:17:39.284 }, 00:17:39.284 { 00:17:39.284 "name": "BaseBdev2", 00:17:39.284 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:39.284 "is_configured": true, 00:17:39.284 "data_offset": 2048, 00:17:39.284 "data_size": 63488 00:17:39.284 }, 00:17:39.284 { 
00:17:39.284 "name": "BaseBdev3", 00:17:39.284 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:39.284 "is_configured": true, 00:17:39.284 "data_offset": 2048, 00:17:39.284 "data_size": 63488 00:17:39.284 }, 00:17:39.284 { 00:17:39.284 "name": "BaseBdev4", 00:17:39.284 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:39.284 "is_configured": true, 00:17:39.284 "data_offset": 2048, 00:17:39.284 "data_size": 63488 00:17:39.284 } 00:17:39.284 ] 00:17:39.284 }' 00:17:39.284 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.284 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.284 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.284 12:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.284 12:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:40.224 12:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.224 12:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.224 12:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.224 12:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.224 12:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.224 12:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.224 12:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.224 12:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.224 12:34:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:40.224 12:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.224 12:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.224 12:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.224 "name": "raid_bdev1", 00:17:40.224 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:40.224 "strip_size_kb": 64, 00:17:40.224 "state": "online", 00:17:40.224 "raid_level": "raid5f", 00:17:40.224 "superblock": true, 00:17:40.224 "num_base_bdevs": 4, 00:17:40.224 "num_base_bdevs_discovered": 4, 00:17:40.224 "num_base_bdevs_operational": 4, 00:17:40.224 "process": { 00:17:40.224 "type": "rebuild", 00:17:40.224 "target": "spare", 00:17:40.224 "progress": { 00:17:40.224 "blocks": 44160, 00:17:40.224 "percent": 23 00:17:40.224 } 00:17:40.224 }, 00:17:40.224 "base_bdevs_list": [ 00:17:40.224 { 00:17:40.224 "name": "spare", 00:17:40.224 "uuid": "ff0ed7f6-5577-5544-9ce3-6e89b68ef151", 00:17:40.224 "is_configured": true, 00:17:40.224 "data_offset": 2048, 00:17:40.224 "data_size": 63488 00:17:40.224 }, 00:17:40.224 { 00:17:40.224 "name": "BaseBdev2", 00:17:40.224 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:40.224 "is_configured": true, 00:17:40.224 "data_offset": 2048, 00:17:40.224 "data_size": 63488 00:17:40.224 }, 00:17:40.224 { 00:17:40.224 "name": "BaseBdev3", 00:17:40.224 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:40.224 "is_configured": true, 00:17:40.224 "data_offset": 2048, 00:17:40.224 "data_size": 63488 00:17:40.224 }, 00:17:40.224 { 00:17:40.224 "name": "BaseBdev4", 00:17:40.224 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:40.224 "is_configured": true, 00:17:40.224 "data_offset": 2048, 00:17:40.224 "data_size": 63488 00:17:40.224 } 00:17:40.224 ] 00:17:40.224 }' 00:17:40.224 12:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:17:40.484 12:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.484 12:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.484 12:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.484 12:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:41.425 12:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.425 12:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.425 12:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.425 12:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.425 12:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.425 12:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.425 12:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.425 12:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.425 12:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.425 12:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.425 12:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.425 12:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.425 "name": "raid_bdev1", 00:17:41.425 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:41.425 "strip_size_kb": 64, 00:17:41.425 "state": "online", 00:17:41.425 
"raid_level": "raid5f", 00:17:41.425 "superblock": true, 00:17:41.425 "num_base_bdevs": 4, 00:17:41.425 "num_base_bdevs_discovered": 4, 00:17:41.425 "num_base_bdevs_operational": 4, 00:17:41.425 "process": { 00:17:41.425 "type": "rebuild", 00:17:41.425 "target": "spare", 00:17:41.425 "progress": { 00:17:41.425 "blocks": 65280, 00:17:41.425 "percent": 34 00:17:41.425 } 00:17:41.425 }, 00:17:41.425 "base_bdevs_list": [ 00:17:41.425 { 00:17:41.425 "name": "spare", 00:17:41.425 "uuid": "ff0ed7f6-5577-5544-9ce3-6e89b68ef151", 00:17:41.425 "is_configured": true, 00:17:41.425 "data_offset": 2048, 00:17:41.425 "data_size": 63488 00:17:41.425 }, 00:17:41.425 { 00:17:41.425 "name": "BaseBdev2", 00:17:41.425 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:41.425 "is_configured": true, 00:17:41.425 "data_offset": 2048, 00:17:41.425 "data_size": 63488 00:17:41.425 }, 00:17:41.425 { 00:17:41.425 "name": "BaseBdev3", 00:17:41.425 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:41.425 "is_configured": true, 00:17:41.425 "data_offset": 2048, 00:17:41.425 "data_size": 63488 00:17:41.425 }, 00:17:41.425 { 00:17:41.425 "name": "BaseBdev4", 00:17:41.425 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:41.425 "is_configured": true, 00:17:41.425 "data_offset": 2048, 00:17:41.425 "data_size": 63488 00:17:41.425 } 00:17:41.425 ] 00:17:41.425 }' 00:17:41.425 12:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.425 12:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.425 12:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.685 12:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.685 12:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:42.625 12:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:17:42.625 12:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.625 12:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.625 12:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.625 12:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.625 12:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.625 12:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.625 12:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.625 12:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.625 12:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.625 12:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.625 12:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.625 "name": "raid_bdev1", 00:17:42.625 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:42.625 "strip_size_kb": 64, 00:17:42.625 "state": "online", 00:17:42.625 "raid_level": "raid5f", 00:17:42.625 "superblock": true, 00:17:42.625 "num_base_bdevs": 4, 00:17:42.625 "num_base_bdevs_discovered": 4, 00:17:42.625 "num_base_bdevs_operational": 4, 00:17:42.625 "process": { 00:17:42.625 "type": "rebuild", 00:17:42.625 "target": "spare", 00:17:42.625 "progress": { 00:17:42.625 "blocks": 88320, 00:17:42.625 "percent": 46 00:17:42.625 } 00:17:42.625 }, 00:17:42.625 "base_bdevs_list": [ 00:17:42.625 { 00:17:42.625 "name": "spare", 00:17:42.625 "uuid": "ff0ed7f6-5577-5544-9ce3-6e89b68ef151", 00:17:42.625 "is_configured": true, 
00:17:42.625 "data_offset": 2048, 00:17:42.625 "data_size": 63488 00:17:42.625 }, 00:17:42.625 { 00:17:42.625 "name": "BaseBdev2", 00:17:42.625 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:42.625 "is_configured": true, 00:17:42.625 "data_offset": 2048, 00:17:42.625 "data_size": 63488 00:17:42.625 }, 00:17:42.625 { 00:17:42.625 "name": "BaseBdev3", 00:17:42.625 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:42.625 "is_configured": true, 00:17:42.625 "data_offset": 2048, 00:17:42.625 "data_size": 63488 00:17:42.625 }, 00:17:42.625 { 00:17:42.625 "name": "BaseBdev4", 00:17:42.625 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:42.625 "is_configured": true, 00:17:42.625 "data_offset": 2048, 00:17:42.625 "data_size": 63488 00:17:42.625 } 00:17:42.625 ] 00:17:42.625 }' 00:17:42.625 12:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.625 12:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.625 12:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.625 12:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.625 12:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:44.007 12:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:44.007 12:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.007 12:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.007 12:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.007 12:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.007 12:34:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.007 12:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.007 12:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.007 12:34:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.007 12:34:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.007 12:34:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.007 12:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.007 "name": "raid_bdev1", 00:17:44.007 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:44.007 "strip_size_kb": 64, 00:17:44.007 "state": "online", 00:17:44.007 "raid_level": "raid5f", 00:17:44.007 "superblock": true, 00:17:44.007 "num_base_bdevs": 4, 00:17:44.007 "num_base_bdevs_discovered": 4, 00:17:44.007 "num_base_bdevs_operational": 4, 00:17:44.007 "process": { 00:17:44.007 "type": "rebuild", 00:17:44.007 "target": "spare", 00:17:44.007 "progress": { 00:17:44.007 "blocks": 109440, 00:17:44.007 "percent": 57 00:17:44.007 } 00:17:44.007 }, 00:17:44.007 "base_bdevs_list": [ 00:17:44.007 { 00:17:44.007 "name": "spare", 00:17:44.007 "uuid": "ff0ed7f6-5577-5544-9ce3-6e89b68ef151", 00:17:44.007 "is_configured": true, 00:17:44.007 "data_offset": 2048, 00:17:44.007 "data_size": 63488 00:17:44.007 }, 00:17:44.007 { 00:17:44.007 "name": "BaseBdev2", 00:17:44.007 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:44.007 "is_configured": true, 00:17:44.007 "data_offset": 2048, 00:17:44.007 "data_size": 63488 00:17:44.007 }, 00:17:44.007 { 00:17:44.007 "name": "BaseBdev3", 00:17:44.007 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:44.007 "is_configured": true, 00:17:44.007 "data_offset": 2048, 00:17:44.007 "data_size": 63488 00:17:44.007 }, 00:17:44.007 
{ 00:17:44.007 "name": "BaseBdev4", 00:17:44.007 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:44.007 "is_configured": true, 00:17:44.007 "data_offset": 2048, 00:17:44.007 "data_size": 63488 00:17:44.007 } 00:17:44.007 ] 00:17:44.007 }' 00:17:44.007 12:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.007 12:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.007 12:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.007 12:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.007 12:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:44.948 12:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:44.948 12:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.948 12:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.948 12:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.948 12:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.948 12:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.948 12:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.948 12:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.948 12:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.948 12:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.948 12:34:56 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.948 12:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.948 "name": "raid_bdev1", 00:17:44.948 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:44.948 "strip_size_kb": 64, 00:17:44.948 "state": "online", 00:17:44.948 "raid_level": "raid5f", 00:17:44.948 "superblock": true, 00:17:44.948 "num_base_bdevs": 4, 00:17:44.948 "num_base_bdevs_discovered": 4, 00:17:44.948 "num_base_bdevs_operational": 4, 00:17:44.948 "process": { 00:17:44.948 "type": "rebuild", 00:17:44.948 "target": "spare", 00:17:44.948 "progress": { 00:17:44.948 "blocks": 132480, 00:17:44.948 "percent": 69 00:17:44.948 } 00:17:44.948 }, 00:17:44.948 "base_bdevs_list": [ 00:17:44.948 { 00:17:44.948 "name": "spare", 00:17:44.948 "uuid": "ff0ed7f6-5577-5544-9ce3-6e89b68ef151", 00:17:44.948 "is_configured": true, 00:17:44.948 "data_offset": 2048, 00:17:44.948 "data_size": 63488 00:17:44.948 }, 00:17:44.948 { 00:17:44.948 "name": "BaseBdev2", 00:17:44.948 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:44.948 "is_configured": true, 00:17:44.948 "data_offset": 2048, 00:17:44.948 "data_size": 63488 00:17:44.948 }, 00:17:44.948 { 00:17:44.948 "name": "BaseBdev3", 00:17:44.948 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:44.948 "is_configured": true, 00:17:44.948 "data_offset": 2048, 00:17:44.948 "data_size": 63488 00:17:44.948 }, 00:17:44.948 { 00:17:44.948 "name": "BaseBdev4", 00:17:44.948 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:44.948 "is_configured": true, 00:17:44.948 "data_offset": 2048, 00:17:44.948 "data_size": 63488 00:17:44.948 } 00:17:44.948 ] 00:17:44.948 }' 00:17:44.948 12:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.948 12:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.948 12:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:17:44.948 12:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.948 12:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:46.331 12:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:46.331 12:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.331 12:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.331 12:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.331 12:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.331 12:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.331 12:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.331 12:34:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.331 12:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.331 12:34:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.331 12:34:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.331 12:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.331 "name": "raid_bdev1", 00:17:46.331 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:46.331 "strip_size_kb": 64, 00:17:46.331 "state": "online", 00:17:46.331 "raid_level": "raid5f", 00:17:46.331 "superblock": true, 00:17:46.331 "num_base_bdevs": 4, 00:17:46.331 "num_base_bdevs_discovered": 4, 00:17:46.331 "num_base_bdevs_operational": 4, 00:17:46.331 "process": { 00:17:46.331 "type": 
"rebuild", 00:17:46.331 "target": "spare", 00:17:46.331 "progress": { 00:17:46.331 "blocks": 153600, 00:17:46.331 "percent": 80 00:17:46.331 } 00:17:46.331 }, 00:17:46.331 "base_bdevs_list": [ 00:17:46.331 { 00:17:46.331 "name": "spare", 00:17:46.331 "uuid": "ff0ed7f6-5577-5544-9ce3-6e89b68ef151", 00:17:46.331 "is_configured": true, 00:17:46.331 "data_offset": 2048, 00:17:46.331 "data_size": 63488 00:17:46.331 }, 00:17:46.331 { 00:17:46.331 "name": "BaseBdev2", 00:17:46.331 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:46.331 "is_configured": true, 00:17:46.331 "data_offset": 2048, 00:17:46.331 "data_size": 63488 00:17:46.331 }, 00:17:46.331 { 00:17:46.331 "name": "BaseBdev3", 00:17:46.331 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:46.331 "is_configured": true, 00:17:46.331 "data_offset": 2048, 00:17:46.331 "data_size": 63488 00:17:46.331 }, 00:17:46.331 { 00:17:46.331 "name": "BaseBdev4", 00:17:46.331 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:46.331 "is_configured": true, 00:17:46.331 "data_offset": 2048, 00:17:46.331 "data_size": 63488 00:17:46.331 } 00:17:46.331 ] 00:17:46.331 }' 00:17:46.331 12:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.331 12:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.331 12:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.331 12:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.332 12:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:47.284 12:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:47.284 12:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.284 12:34:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.284 12:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.284 12:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.284 12:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.284 12:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.284 12:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.284 12:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.284 12:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.284 12:34:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.284 12:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.284 "name": "raid_bdev1", 00:17:47.284 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:47.284 "strip_size_kb": 64, 00:17:47.284 "state": "online", 00:17:47.284 "raid_level": "raid5f", 00:17:47.284 "superblock": true, 00:17:47.284 "num_base_bdevs": 4, 00:17:47.284 "num_base_bdevs_discovered": 4, 00:17:47.284 "num_base_bdevs_operational": 4, 00:17:47.284 "process": { 00:17:47.284 "type": "rebuild", 00:17:47.284 "target": "spare", 00:17:47.284 "progress": { 00:17:47.284 "blocks": 174720, 00:17:47.284 "percent": 91 00:17:47.284 } 00:17:47.284 }, 00:17:47.284 "base_bdevs_list": [ 00:17:47.284 { 00:17:47.284 "name": "spare", 00:17:47.284 "uuid": "ff0ed7f6-5577-5544-9ce3-6e89b68ef151", 00:17:47.284 "is_configured": true, 00:17:47.284 "data_offset": 2048, 00:17:47.284 "data_size": 63488 00:17:47.284 }, 00:17:47.284 { 00:17:47.284 "name": "BaseBdev2", 00:17:47.284 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:47.284 
"is_configured": true, 00:17:47.284 "data_offset": 2048, 00:17:47.284 "data_size": 63488 00:17:47.284 }, 00:17:47.284 { 00:17:47.284 "name": "BaseBdev3", 00:17:47.284 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:47.284 "is_configured": true, 00:17:47.284 "data_offset": 2048, 00:17:47.284 "data_size": 63488 00:17:47.284 }, 00:17:47.284 { 00:17:47.284 "name": "BaseBdev4", 00:17:47.284 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:47.284 "is_configured": true, 00:17:47.284 "data_offset": 2048, 00:17:47.284 "data_size": 63488 00:17:47.284 } 00:17:47.284 ] 00:17:47.284 }' 00:17:47.284 12:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.284 12:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.284 12:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.284 12:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.284 12:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:48.225 [2024-09-30 12:34:59.769484] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:48.225 [2024-09-30 12:34:59.769597] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:48.225 [2024-09-30 12:34:59.769749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.225 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:48.225 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.225 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.225 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:48.225 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.225 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.225 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.225 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.225 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.484 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.484 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.484 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.484 "name": "raid_bdev1", 00:17:48.484 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:48.484 "strip_size_kb": 64, 00:17:48.484 "state": "online", 00:17:48.484 "raid_level": "raid5f", 00:17:48.484 "superblock": true, 00:17:48.484 "num_base_bdevs": 4, 00:17:48.484 "num_base_bdevs_discovered": 4, 00:17:48.484 "num_base_bdevs_operational": 4, 00:17:48.484 "base_bdevs_list": [ 00:17:48.484 { 00:17:48.484 "name": "spare", 00:17:48.484 "uuid": "ff0ed7f6-5577-5544-9ce3-6e89b68ef151", 00:17:48.484 "is_configured": true, 00:17:48.484 "data_offset": 2048, 00:17:48.484 "data_size": 63488 00:17:48.484 }, 00:17:48.484 { 00:17:48.484 "name": "BaseBdev2", 00:17:48.484 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:48.484 "is_configured": true, 00:17:48.484 "data_offset": 2048, 00:17:48.484 "data_size": 63488 00:17:48.484 }, 00:17:48.484 { 00:17:48.484 "name": "BaseBdev3", 00:17:48.484 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:48.484 "is_configured": true, 00:17:48.484 "data_offset": 2048, 00:17:48.484 "data_size": 63488 00:17:48.484 }, 00:17:48.484 { 00:17:48.484 "name": 
"BaseBdev4", 00:17:48.484 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:48.484 "is_configured": true, 00:17:48.484 "data_offset": 2048, 00:17:48.484 "data_size": 63488 00:17:48.485 } 00:17:48.485 ] 00:17:48.485 }' 00:17:48.485 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.485 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:48.485 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.485 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:48.485 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:48.485 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:48.485 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.485 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:48.485 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:48.485 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.485 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.485 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.485 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.485 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.485 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.485 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:48.485 "name": "raid_bdev1", 00:17:48.485 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:48.485 "strip_size_kb": 64, 00:17:48.485 "state": "online", 00:17:48.485 "raid_level": "raid5f", 00:17:48.485 "superblock": true, 00:17:48.485 "num_base_bdevs": 4, 00:17:48.485 "num_base_bdevs_discovered": 4, 00:17:48.485 "num_base_bdevs_operational": 4, 00:17:48.485 "base_bdevs_list": [ 00:17:48.485 { 00:17:48.485 "name": "spare", 00:17:48.485 "uuid": "ff0ed7f6-5577-5544-9ce3-6e89b68ef151", 00:17:48.485 "is_configured": true, 00:17:48.485 "data_offset": 2048, 00:17:48.485 "data_size": 63488 00:17:48.485 }, 00:17:48.485 { 00:17:48.485 "name": "BaseBdev2", 00:17:48.485 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:48.485 "is_configured": true, 00:17:48.485 "data_offset": 2048, 00:17:48.485 "data_size": 63488 00:17:48.485 }, 00:17:48.485 { 00:17:48.485 "name": "BaseBdev3", 00:17:48.485 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:48.485 "is_configured": true, 00:17:48.485 "data_offset": 2048, 00:17:48.485 "data_size": 63488 00:17:48.485 }, 00:17:48.485 { 00:17:48.485 "name": "BaseBdev4", 00:17:48.485 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:48.485 "is_configured": true, 00:17:48.485 "data_offset": 2048, 00:17:48.485 "data_size": 63488 00:17:48.485 } 00:17:48.485 ] 00:17:48.485 }' 00:17:48.485 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.485 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:48.485 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.744 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:48.744 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:48.744 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.744 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.744 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.744 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.744 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:48.744 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.744 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.744 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.744 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.744 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.744 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.744 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.744 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.744 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.744 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.744 "name": "raid_bdev1", 00:17:48.744 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:48.744 "strip_size_kb": 64, 00:17:48.744 "state": "online", 00:17:48.744 "raid_level": "raid5f", 00:17:48.744 "superblock": true, 00:17:48.744 "num_base_bdevs": 4, 00:17:48.744 "num_base_bdevs_discovered": 4, 00:17:48.744 "num_base_bdevs_operational": 4, 00:17:48.744 "base_bdevs_list": [ 00:17:48.744 { 
00:17:48.744 "name": "spare", 00:17:48.744 "uuid": "ff0ed7f6-5577-5544-9ce3-6e89b68ef151", 00:17:48.744 "is_configured": true, 00:17:48.744 "data_offset": 2048, 00:17:48.744 "data_size": 63488 00:17:48.744 }, 00:17:48.744 { 00:17:48.744 "name": "BaseBdev2", 00:17:48.744 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:48.744 "is_configured": true, 00:17:48.744 "data_offset": 2048, 00:17:48.744 "data_size": 63488 00:17:48.744 }, 00:17:48.744 { 00:17:48.744 "name": "BaseBdev3", 00:17:48.744 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:48.744 "is_configured": true, 00:17:48.744 "data_offset": 2048, 00:17:48.744 "data_size": 63488 00:17:48.744 }, 00:17:48.744 { 00:17:48.744 "name": "BaseBdev4", 00:17:48.744 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:48.744 "is_configured": true, 00:17:48.744 "data_offset": 2048, 00:17:48.744 "data_size": 63488 00:17:48.744 } 00:17:48.744 ] 00:17:48.744 }' 00:17:48.744 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.744 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.002 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:49.003 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.003 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.003 [2024-09-30 12:35:00.844260] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:49.003 [2024-09-30 12:35:00.844291] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:49.003 [2024-09-30 12:35:00.844361] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.003 [2024-09-30 12:35:00.844445] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:49.003 [2024-09-30 
12:35:00.844457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:49.003 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.003 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.003 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:49.003 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.003 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.003 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.262 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:49.263 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:49.263 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:49.263 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:49.263 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:49.263 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:49.263 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:49.263 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:49.263 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:49.263 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:49.263 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:49.263 12:35:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:49.263 12:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:49.263 /dev/nbd0 00:17:49.263 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:49.263 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:49.263 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:49.263 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:49.263 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:49.263 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:49.263 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:49.263 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:49.263 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:49.263 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:49.263 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:49.263 1+0 records in 00:17:49.263 1+0 records out 00:17:49.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000844698 s, 4.8 MB/s 00:17:49.263 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:49.263 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:49.263 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:49.524 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:49.524 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:49.524 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:49.524 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:49.524 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:49.524 /dev/nbd1 00:17:49.524 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:49.524 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:49.524 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:49.524 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:49.524 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:49.524 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:49.524 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:49.524 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:49.524 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:49.524 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:49.524 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:49.524 1+0 records in 00:17:49.524 
1+0 records out 00:17:49.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399327 s, 10.3 MB/s 00:17:49.524 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:49.524 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:49.524 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:49.785 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:49.785 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:49.785 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:49.785 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:49.785 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:49.785 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:49.785 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:49.785 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:49.785 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:49.785 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:49.785 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:49.785 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:50.044 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:50.044 
12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:50.045 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:50.045 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:50.045 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:50.045 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:50.045 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:50.045 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:50.045 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:50.045 12:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:50.305 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:50.305 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:50.305 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:50.305 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:50.305 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:50.305 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:50.305 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:50.305 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:50.305 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:50.305 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd 
bdev_passthru_delete spare 00:17:50.305 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.305 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.305 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.305 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:50.305 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.305 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.305 [2024-09-30 12:35:02.047066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:50.305 [2024-09-30 12:35:02.047197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.305 [2024-09-30 12:35:02.047239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:50.305 [2024-09-30 12:35:02.047248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.305 [2024-09-30 12:35:02.049444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.305 [2024-09-30 12:35:02.049484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:50.305 [2024-09-30 12:35:02.049574] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:50.305 [2024-09-30 12:35:02.049621] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:50.305 [2024-09-30 12:35:02.049766] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:50.305 [2024-09-30 12:35:02.049852] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:50.305 [2024-09-30 12:35:02.049938] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:50.305 spare 00:17:50.305 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.306 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:50.306 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.306 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.306 [2024-09-30 12:35:02.149829] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:50.306 [2024-09-30 12:35:02.149859] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:50.306 [2024-09-30 12:35:02.150116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:50.306 [2024-09-30 12:35:02.157082] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:50.306 [2024-09-30 12:35:02.157151] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:50.306 [2024-09-30 12:35:02.157337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.306 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.306 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:50.306 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.306 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.306 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.306 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:17:50.306 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:50.306 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.306 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.306 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.306 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.306 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.306 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.306 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.306 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.306 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.566 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.566 "name": "raid_bdev1", 00:17:50.566 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:50.566 "strip_size_kb": 64, 00:17:50.566 "state": "online", 00:17:50.567 "raid_level": "raid5f", 00:17:50.567 "superblock": true, 00:17:50.567 "num_base_bdevs": 4, 00:17:50.567 "num_base_bdevs_discovered": 4, 00:17:50.567 "num_base_bdevs_operational": 4, 00:17:50.567 "base_bdevs_list": [ 00:17:50.567 { 00:17:50.567 "name": "spare", 00:17:50.567 "uuid": "ff0ed7f6-5577-5544-9ce3-6e89b68ef151", 00:17:50.567 "is_configured": true, 00:17:50.567 "data_offset": 2048, 00:17:50.567 "data_size": 63488 00:17:50.567 }, 00:17:50.567 { 00:17:50.567 "name": "BaseBdev2", 00:17:50.567 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:50.567 "is_configured": true, 00:17:50.567 "data_offset": 
2048, 00:17:50.567 "data_size": 63488 00:17:50.567 }, 00:17:50.567 { 00:17:50.567 "name": "BaseBdev3", 00:17:50.567 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:50.567 "is_configured": true, 00:17:50.567 "data_offset": 2048, 00:17:50.567 "data_size": 63488 00:17:50.567 }, 00:17:50.567 { 00:17:50.567 "name": "BaseBdev4", 00:17:50.567 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:50.567 "is_configured": true, 00:17:50.567 "data_offset": 2048, 00:17:50.567 "data_size": 63488 00:17:50.567 } 00:17:50.567 ] 00:17:50.567 }' 00:17:50.567 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.567 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.827 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:50.827 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.827 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:50.827 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:50.827 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.827 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.827 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.827 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.827 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.827 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.827 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.827 "name": 
"raid_bdev1", 00:17:50.827 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:50.827 "strip_size_kb": 64, 00:17:50.827 "state": "online", 00:17:50.827 "raid_level": "raid5f", 00:17:50.827 "superblock": true, 00:17:50.827 "num_base_bdevs": 4, 00:17:50.827 "num_base_bdevs_discovered": 4, 00:17:50.827 "num_base_bdevs_operational": 4, 00:17:50.827 "base_bdevs_list": [ 00:17:50.827 { 00:17:50.827 "name": "spare", 00:17:50.827 "uuid": "ff0ed7f6-5577-5544-9ce3-6e89b68ef151", 00:17:50.827 "is_configured": true, 00:17:50.827 "data_offset": 2048, 00:17:50.827 "data_size": 63488 00:17:50.827 }, 00:17:50.827 { 00:17:50.827 "name": "BaseBdev2", 00:17:50.827 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:50.827 "is_configured": true, 00:17:50.827 "data_offset": 2048, 00:17:50.827 "data_size": 63488 00:17:50.827 }, 00:17:50.827 { 00:17:50.827 "name": "BaseBdev3", 00:17:50.827 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:50.827 "is_configured": true, 00:17:50.827 "data_offset": 2048, 00:17:50.827 "data_size": 63488 00:17:50.827 }, 00:17:50.827 { 00:17:50.827 "name": "BaseBdev4", 00:17:50.827 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:50.827 "is_configured": true, 00:17:50.827 "data_offset": 2048, 00:17:50.827 "data_size": 63488 00:17:50.827 } 00:17:50.827 ] 00:17:50.827 }' 00:17:50.827 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.827 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:50.827 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.088 [2024-09-30 12:35:02.800322] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.088 "name": "raid_bdev1", 00:17:51.088 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:51.088 "strip_size_kb": 64, 00:17:51.088 "state": "online", 00:17:51.088 "raid_level": "raid5f", 00:17:51.088 "superblock": true, 00:17:51.088 "num_base_bdevs": 4, 00:17:51.088 "num_base_bdevs_discovered": 3, 00:17:51.088 "num_base_bdevs_operational": 3, 00:17:51.088 "base_bdevs_list": [ 00:17:51.088 { 00:17:51.088 "name": null, 00:17:51.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.088 "is_configured": false, 00:17:51.088 "data_offset": 0, 00:17:51.088 "data_size": 63488 00:17:51.088 }, 00:17:51.088 { 00:17:51.088 "name": "BaseBdev2", 00:17:51.088 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:51.088 "is_configured": true, 00:17:51.088 "data_offset": 2048, 00:17:51.088 "data_size": 63488 00:17:51.088 }, 00:17:51.088 { 00:17:51.088 "name": "BaseBdev3", 00:17:51.088 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:51.088 "is_configured": true, 00:17:51.088 "data_offset": 2048, 00:17:51.088 "data_size": 63488 00:17:51.088 }, 00:17:51.088 { 00:17:51.088 "name": "BaseBdev4", 00:17:51.088 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:51.088 "is_configured": true, 00:17:51.088 "data_offset": 
2048, 00:17:51.088 "data_size": 63488 00:17:51.088 } 00:17:51.088 ] 00:17:51.088 }' 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.088 12:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.658 12:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:51.658 12:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.658 12:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.658 [2024-09-30 12:35:03.287614] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:51.658 [2024-09-30 12:35:03.287812] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:51.658 [2024-09-30 12:35:03.287878] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:51.658 [2024-09-30 12:35:03.287932] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:51.658 [2024-09-30 12:35:03.301235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:51.658 12:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.658 12:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:51.658 [2024-09-30 12:35:03.309894] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:52.597 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.597 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.597 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.597 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.597 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.597 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.597 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.597 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.597 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.597 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.597 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.597 "name": "raid_bdev1", 00:17:52.597 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:52.597 "strip_size_kb": 64, 00:17:52.597 "state": "online", 00:17:52.597 
"raid_level": "raid5f", 00:17:52.597 "superblock": true, 00:17:52.597 "num_base_bdevs": 4, 00:17:52.597 "num_base_bdevs_discovered": 4, 00:17:52.597 "num_base_bdevs_operational": 4, 00:17:52.597 "process": { 00:17:52.597 "type": "rebuild", 00:17:52.597 "target": "spare", 00:17:52.597 "progress": { 00:17:52.597 "blocks": 19200, 00:17:52.597 "percent": 10 00:17:52.597 } 00:17:52.597 }, 00:17:52.597 "base_bdevs_list": [ 00:17:52.597 { 00:17:52.597 "name": "spare", 00:17:52.597 "uuid": "ff0ed7f6-5577-5544-9ce3-6e89b68ef151", 00:17:52.597 "is_configured": true, 00:17:52.597 "data_offset": 2048, 00:17:52.597 "data_size": 63488 00:17:52.597 }, 00:17:52.597 { 00:17:52.597 "name": "BaseBdev2", 00:17:52.597 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:52.597 "is_configured": true, 00:17:52.597 "data_offset": 2048, 00:17:52.597 "data_size": 63488 00:17:52.597 }, 00:17:52.597 { 00:17:52.597 "name": "BaseBdev3", 00:17:52.597 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:52.597 "is_configured": true, 00:17:52.597 "data_offset": 2048, 00:17:52.597 "data_size": 63488 00:17:52.597 }, 00:17:52.597 { 00:17:52.597 "name": "BaseBdev4", 00:17:52.597 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:52.597 "is_configured": true, 00:17:52.597 "data_offset": 2048, 00:17:52.597 "data_size": 63488 00:17:52.597 } 00:17:52.597 ] 00:17:52.597 }' 00:17:52.597 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.597 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:52.597 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.597 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.597 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:52.597 12:35:04 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.597 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.597 [2024-09-30 12:35:04.468464] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:52.858 [2024-09-30 12:35:04.515302] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:52.858 [2024-09-30 12:35:04.515365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.858 [2024-09-30 12:35:04.515381] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:52.858 [2024-09-30 12:35:04.515390] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:52.858 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.858 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:52.858 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.858 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.858 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.858 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.858 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:52.858 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.858 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.858 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.858 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:52.858 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.858 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.858 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.858 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.858 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.858 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.858 "name": "raid_bdev1", 00:17:52.858 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:52.858 "strip_size_kb": 64, 00:17:52.858 "state": "online", 00:17:52.858 "raid_level": "raid5f", 00:17:52.858 "superblock": true, 00:17:52.858 "num_base_bdevs": 4, 00:17:52.858 "num_base_bdevs_discovered": 3, 00:17:52.858 "num_base_bdevs_operational": 3, 00:17:52.858 "base_bdevs_list": [ 00:17:52.858 { 00:17:52.858 "name": null, 00:17:52.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.858 "is_configured": false, 00:17:52.858 "data_offset": 0, 00:17:52.858 "data_size": 63488 00:17:52.858 }, 00:17:52.858 { 00:17:52.858 "name": "BaseBdev2", 00:17:52.858 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:52.858 "is_configured": true, 00:17:52.858 "data_offset": 2048, 00:17:52.858 "data_size": 63488 00:17:52.858 }, 00:17:52.858 { 00:17:52.858 "name": "BaseBdev3", 00:17:52.858 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:52.858 "is_configured": true, 00:17:52.858 "data_offset": 2048, 00:17:52.858 "data_size": 63488 00:17:52.858 }, 00:17:52.858 { 00:17:52.858 "name": "BaseBdev4", 00:17:52.858 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:52.858 "is_configured": true, 00:17:52.858 "data_offset": 2048, 00:17:52.858 "data_size": 63488 00:17:52.858 } 00:17:52.858 ] 00:17:52.858 
}' 00:17:52.858 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.858 12:35:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.118 12:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:53.118 12:35:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.118 12:35:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.118 [2024-09-30 12:35:05.009755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:53.118 [2024-09-30 12:35:05.009881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.118 [2024-09-30 12:35:05.009923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:53.118 [2024-09-30 12:35:05.009952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.118 [2024-09-30 12:35:05.010422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.118 [2024-09-30 12:35:05.010482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:53.118 [2024-09-30 12:35:05.010579] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:53.118 [2024-09-30 12:35:05.010618] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:53.118 [2024-09-30 12:35:05.010655] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:53.118 [2024-09-30 12:35:05.010723] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:53.378 [2024-09-30 12:35:05.023660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:53.378 spare 00:17:53.378 12:35:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.378 12:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:53.378 [2024-09-30 12:35:05.032584] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:54.317 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.317 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.317 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.317 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.317 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.317 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.317 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.317 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.317 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.317 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.317 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.317 "name": "raid_bdev1", 00:17:54.317 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:54.317 "strip_size_kb": 64, 00:17:54.317 "state": 
"online", 00:17:54.317 "raid_level": "raid5f", 00:17:54.317 "superblock": true, 00:17:54.317 "num_base_bdevs": 4, 00:17:54.317 "num_base_bdevs_discovered": 4, 00:17:54.317 "num_base_bdevs_operational": 4, 00:17:54.317 "process": { 00:17:54.317 "type": "rebuild", 00:17:54.317 "target": "spare", 00:17:54.317 "progress": { 00:17:54.317 "blocks": 19200, 00:17:54.317 "percent": 10 00:17:54.317 } 00:17:54.317 }, 00:17:54.317 "base_bdevs_list": [ 00:17:54.317 { 00:17:54.317 "name": "spare", 00:17:54.317 "uuid": "ff0ed7f6-5577-5544-9ce3-6e89b68ef151", 00:17:54.317 "is_configured": true, 00:17:54.317 "data_offset": 2048, 00:17:54.317 "data_size": 63488 00:17:54.317 }, 00:17:54.317 { 00:17:54.317 "name": "BaseBdev2", 00:17:54.317 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:54.317 "is_configured": true, 00:17:54.317 "data_offset": 2048, 00:17:54.317 "data_size": 63488 00:17:54.317 }, 00:17:54.317 { 00:17:54.317 "name": "BaseBdev3", 00:17:54.317 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:54.317 "is_configured": true, 00:17:54.317 "data_offset": 2048, 00:17:54.317 "data_size": 63488 00:17:54.317 }, 00:17:54.317 { 00:17:54.317 "name": "BaseBdev4", 00:17:54.317 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:54.317 "is_configured": true, 00:17:54.317 "data_offset": 2048, 00:17:54.317 "data_size": 63488 00:17:54.317 } 00:17:54.317 ] 00:17:54.317 }' 00:17:54.317 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.317 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.317 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.317 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.317 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:54.317 12:35:06 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.317 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.317 [2024-09-30 12:35:06.167246] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.577 [2024-09-30 12:35:06.237997] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:54.577 [2024-09-30 12:35:06.238050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.577 [2024-09-30 12:35:06.238068] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.577 [2024-09-30 12:35:06.238075] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:54.577 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.577 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:54.577 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.577 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.577 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:54.577 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.577 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:54.577 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.577 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.577 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.577 12:35:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.577 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.577 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.577 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.577 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.577 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.577 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.577 "name": "raid_bdev1", 00:17:54.577 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:54.577 "strip_size_kb": 64, 00:17:54.577 "state": "online", 00:17:54.577 "raid_level": "raid5f", 00:17:54.577 "superblock": true, 00:17:54.577 "num_base_bdevs": 4, 00:17:54.577 "num_base_bdevs_discovered": 3, 00:17:54.578 "num_base_bdevs_operational": 3, 00:17:54.578 "base_bdevs_list": [ 00:17:54.578 { 00:17:54.578 "name": null, 00:17:54.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.578 "is_configured": false, 00:17:54.578 "data_offset": 0, 00:17:54.578 "data_size": 63488 00:17:54.578 }, 00:17:54.578 { 00:17:54.578 "name": "BaseBdev2", 00:17:54.578 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:54.578 "is_configured": true, 00:17:54.578 "data_offset": 2048, 00:17:54.578 "data_size": 63488 00:17:54.578 }, 00:17:54.578 { 00:17:54.578 "name": "BaseBdev3", 00:17:54.578 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:54.578 "is_configured": true, 00:17:54.578 "data_offset": 2048, 00:17:54.578 "data_size": 63488 00:17:54.578 }, 00:17:54.578 { 00:17:54.578 "name": "BaseBdev4", 00:17:54.578 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:54.578 "is_configured": true, 00:17:54.578 "data_offset": 2048, 00:17:54.578 
"data_size": 63488 00:17:54.578 } 00:17:54.578 ] 00:17:54.578 }' 00:17:54.578 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.578 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.837 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:54.837 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.837 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:54.837 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:54.837 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.837 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.837 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.837 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.837 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.837 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.098 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.098 "name": "raid_bdev1", 00:17:55.098 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:55.098 "strip_size_kb": 64, 00:17:55.098 "state": "online", 00:17:55.098 "raid_level": "raid5f", 00:17:55.098 "superblock": true, 00:17:55.098 "num_base_bdevs": 4, 00:17:55.098 "num_base_bdevs_discovered": 3, 00:17:55.098 "num_base_bdevs_operational": 3, 00:17:55.098 "base_bdevs_list": [ 00:17:55.098 { 00:17:55.098 "name": null, 00:17:55.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.098 
"is_configured": false, 00:17:55.098 "data_offset": 0, 00:17:55.098 "data_size": 63488 00:17:55.098 }, 00:17:55.098 { 00:17:55.098 "name": "BaseBdev2", 00:17:55.098 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:55.098 "is_configured": true, 00:17:55.098 "data_offset": 2048, 00:17:55.098 "data_size": 63488 00:17:55.098 }, 00:17:55.098 { 00:17:55.098 "name": "BaseBdev3", 00:17:55.098 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:55.098 "is_configured": true, 00:17:55.098 "data_offset": 2048, 00:17:55.098 "data_size": 63488 00:17:55.098 }, 00:17:55.098 { 00:17:55.098 "name": "BaseBdev4", 00:17:55.098 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:55.098 "is_configured": true, 00:17:55.098 "data_offset": 2048, 00:17:55.098 "data_size": 63488 00:17:55.098 } 00:17:55.098 ] 00:17:55.098 }' 00:17:55.098 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.098 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:55.098 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.098 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:55.098 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:55.098 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.098 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.098 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.098 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:55.098 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.098 12:35:06 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.098 [2024-09-30 12:35:06.884159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:55.098 [2024-09-30 12:35:06.884257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.098 [2024-09-30 12:35:06.884283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:55.098 [2024-09-30 12:35:06.884292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.098 [2024-09-30 12:35:06.884712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.098 [2024-09-30 12:35:06.884729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:55.098 [2024-09-30 12:35:06.884812] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:55.098 [2024-09-30 12:35:06.884826] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:55.098 [2024-09-30 12:35:06.884838] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:55.098 [2024-09-30 12:35:06.884847] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:55.098 BaseBdev1 00:17:55.098 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.098 12:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:56.037 12:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:56.037 12:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.037 12:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:56.037 12:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.037 12:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.037 12:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:56.037 12:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.037 12:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.037 12:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.037 12:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.037 12:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.037 12:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.037 12:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.037 12:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.037 12:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.296 12:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.296 "name": "raid_bdev1", 00:17:56.296 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:56.296 "strip_size_kb": 64, 00:17:56.296 "state": "online", 00:17:56.296 "raid_level": "raid5f", 00:17:56.296 "superblock": true, 00:17:56.296 "num_base_bdevs": 4, 00:17:56.296 "num_base_bdevs_discovered": 3, 00:17:56.296 "num_base_bdevs_operational": 3, 00:17:56.296 "base_bdevs_list": [ 00:17:56.296 { 00:17:56.296 "name": null, 00:17:56.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.296 "is_configured": false, 00:17:56.296 
"data_offset": 0, 00:17:56.296 "data_size": 63488 00:17:56.296 }, 00:17:56.296 { 00:17:56.296 "name": "BaseBdev2", 00:17:56.296 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:56.296 "is_configured": true, 00:17:56.296 "data_offset": 2048, 00:17:56.296 "data_size": 63488 00:17:56.296 }, 00:17:56.296 { 00:17:56.296 "name": "BaseBdev3", 00:17:56.296 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:56.296 "is_configured": true, 00:17:56.296 "data_offset": 2048, 00:17:56.296 "data_size": 63488 00:17:56.296 }, 00:17:56.296 { 00:17:56.296 "name": "BaseBdev4", 00:17:56.296 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:56.296 "is_configured": true, 00:17:56.296 "data_offset": 2048, 00:17:56.296 "data_size": 63488 00:17:56.296 } 00:17:56.296 ] 00:17:56.296 }' 00:17:56.296 12:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.296 12:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.556 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:56.556 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.556 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:56.556 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.556 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.556 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.556 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.556 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.556 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:56.556 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.556 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.556 "name": "raid_bdev1", 00:17:56.556 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:56.556 "strip_size_kb": 64, 00:17:56.556 "state": "online", 00:17:56.556 "raid_level": "raid5f", 00:17:56.556 "superblock": true, 00:17:56.556 "num_base_bdevs": 4, 00:17:56.556 "num_base_bdevs_discovered": 3, 00:17:56.556 "num_base_bdevs_operational": 3, 00:17:56.556 "base_bdevs_list": [ 00:17:56.556 { 00:17:56.556 "name": null, 00:17:56.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.556 "is_configured": false, 00:17:56.556 "data_offset": 0, 00:17:56.556 "data_size": 63488 00:17:56.556 }, 00:17:56.556 { 00:17:56.556 "name": "BaseBdev2", 00:17:56.556 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:56.556 "is_configured": true, 00:17:56.556 "data_offset": 2048, 00:17:56.556 "data_size": 63488 00:17:56.556 }, 00:17:56.556 { 00:17:56.556 "name": "BaseBdev3", 00:17:56.556 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:56.556 "is_configured": true, 00:17:56.556 "data_offset": 2048, 00:17:56.556 "data_size": 63488 00:17:56.556 }, 00:17:56.556 { 00:17:56.556 "name": "BaseBdev4", 00:17:56.556 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:56.556 "is_configured": true, 00:17:56.556 "data_offset": 2048, 00:17:56.556 "data_size": 63488 00:17:56.556 } 00:17:56.556 ] 00:17:56.556 }' 00:17:56.556 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.556 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:56.556 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.816 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:56.816 
12:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:56.816 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:17:56.816 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:56.816 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:56.816 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.816 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:56.816 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.816 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:56.816 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.816 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.816 [2024-09-30 12:35:08.505370] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.816 [2024-09-30 12:35:08.505566] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:56.816 [2024-09-30 12:35:08.505621] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:56.816 request: 00:17:56.816 { 00:17:56.816 "base_bdev": "BaseBdev1", 00:17:56.816 "raid_bdev": "raid_bdev1", 00:17:56.816 "method": "bdev_raid_add_base_bdev", 00:17:56.816 "req_id": 1 00:17:56.816 } 00:17:56.816 Got JSON-RPC error response 00:17:56.816 response: 00:17:56.816 { 00:17:56.816 "code": -22, 00:17:56.816 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:17:56.816 } 00:17:56.816 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:56.816 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:17:56.816 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:56.816 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:56.816 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:56.816 12:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:57.755 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:57.755 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.755 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.755 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.755 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.755 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:57.755 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.755 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.755 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.755 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.755 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.755 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.755 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.755 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.755 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.755 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.755 "name": "raid_bdev1", 00:17:57.755 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:57.755 "strip_size_kb": 64, 00:17:57.755 "state": "online", 00:17:57.755 "raid_level": "raid5f", 00:17:57.755 "superblock": true, 00:17:57.755 "num_base_bdevs": 4, 00:17:57.755 "num_base_bdevs_discovered": 3, 00:17:57.755 "num_base_bdevs_operational": 3, 00:17:57.755 "base_bdevs_list": [ 00:17:57.755 { 00:17:57.755 "name": null, 00:17:57.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.755 "is_configured": false, 00:17:57.755 "data_offset": 0, 00:17:57.755 "data_size": 63488 00:17:57.755 }, 00:17:57.755 { 00:17:57.755 "name": "BaseBdev2", 00:17:57.755 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:57.755 "is_configured": true, 00:17:57.755 "data_offset": 2048, 00:17:57.755 "data_size": 63488 00:17:57.755 }, 00:17:57.755 { 00:17:57.755 "name": "BaseBdev3", 00:17:57.755 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:57.755 "is_configured": true, 00:17:57.755 "data_offset": 2048, 00:17:57.755 "data_size": 63488 00:17:57.755 }, 00:17:57.755 { 00:17:57.755 "name": "BaseBdev4", 00:17:57.755 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:57.755 "is_configured": true, 00:17:57.755 "data_offset": 2048, 00:17:57.755 "data_size": 63488 00:17:57.755 } 00:17:57.755 ] 00:17:57.755 }' 00:17:57.755 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.755 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:58.324 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.324 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.324 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:58.324 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:58.324 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.324 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.324 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.324 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.324 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.324 12:35:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.324 12:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.324 "name": "raid_bdev1", 00:17:58.324 "uuid": "367970b6-834e-462f-890a-8d3d614b174c", 00:17:58.324 "strip_size_kb": 64, 00:17:58.324 "state": "online", 00:17:58.324 "raid_level": "raid5f", 00:17:58.324 "superblock": true, 00:17:58.324 "num_base_bdevs": 4, 00:17:58.324 "num_base_bdevs_discovered": 3, 00:17:58.324 "num_base_bdevs_operational": 3, 00:17:58.324 "base_bdevs_list": [ 00:17:58.324 { 00:17:58.324 "name": null, 00:17:58.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.324 "is_configured": false, 00:17:58.324 "data_offset": 0, 00:17:58.324 "data_size": 63488 00:17:58.324 }, 00:17:58.324 { 00:17:58.324 "name": "BaseBdev2", 00:17:58.324 "uuid": "fde30ab1-11a7-5f79-94c0-eecf97d1db6d", 00:17:58.324 "is_configured": true, 
00:17:58.324 "data_offset": 2048, 00:17:58.324 "data_size": 63488 00:17:58.324 }, 00:17:58.324 { 00:17:58.324 "name": "BaseBdev3", 00:17:58.324 "uuid": "d20f5ade-9c0d-53f6-a6c8-31b996992306", 00:17:58.324 "is_configured": true, 00:17:58.324 "data_offset": 2048, 00:17:58.324 "data_size": 63488 00:17:58.324 }, 00:17:58.324 { 00:17:58.324 "name": "BaseBdev4", 00:17:58.324 "uuid": "99058f2c-3bca-5878-b354-7f76e45eca58", 00:17:58.324 "is_configured": true, 00:17:58.324 "data_offset": 2048, 00:17:58.324 "data_size": 63488 00:17:58.324 } 00:17:58.324 ] 00:17:58.324 }' 00:17:58.324 12:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.324 12:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:58.324 12:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.324 12:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:58.324 12:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84960 00:17:58.324 12:35:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84960 ']' 00:17:58.324 12:35:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 84960 00:17:58.324 12:35:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:58.324 12:35:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:58.324 12:35:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84960 00:17:58.324 12:35:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:58.324 killing process with pid 84960 00:17:58.324 Received shutdown signal, test time was about 60.000000 seconds 00:17:58.324 00:17:58.324 Latency(us) 00:17:58.324 Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:17:58.324 =================================================================================================================== 00:17:58.324 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:58.324 12:35:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:58.324 12:35:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84960' 00:17:58.324 12:35:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 84960 00:17:58.324 [2024-09-30 12:35:10.166451] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:58.324 [2024-09-30 12:35:10.166565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.324 12:35:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 84960 00:17:58.325 [2024-09-30 12:35:10.166634] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.325 [2024-09-30 12:35:10.166647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:58.894 [2024-09-30 12:35:10.622777] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:00.274 12:35:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:00.274 00:18:00.274 real 0m27.225s 00:18:00.274 user 0m34.184s 00:18:00.274 sys 0m3.191s 00:18:00.274 12:35:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.274 12:35:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.274 ************************************ 00:18:00.274 END TEST raid5f_rebuild_test_sb 00:18:00.274 ************************************ 00:18:00.274 12:35:11 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:18:00.274 12:35:11 bdev_raid -- 
bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:18:00.274 12:35:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:00.274 12:35:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:00.274 12:35:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:00.274 ************************************ 00:18:00.274 START TEST raid_state_function_test_sb_4k 00:18:00.275 ************************************ 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:00.275 Process raid pid: 85778 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85778 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85778' 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85778 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 85778 ']' 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.275 12:35:11 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:00.275 12:35:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.275 [2024-09-30 12:35:11.982247] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:00.275 [2024-09-30 12:35:11.982429] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.275 [2024-09-30 12:35:12.147325] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.535 [2024-09-30 12:35:12.351989] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.795 [2024-09-30 12:35:12.563520] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.795 [2024-09-30 12:35:12.563561] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.055 [2024-09-30 12:35:12.815366] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:01.055 [2024-09-30 12:35:12.815499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:01.055 [2024-09-30 12:35:12.815538] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:01.055 [2024-09-30 12:35:12.815577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.055 12:35:12 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.055 "name": "Existed_Raid", 00:18:01.055 "uuid": "e027a85d-0bef-46cc-aa35-a2677335ce59", 00:18:01.055 "strip_size_kb": 0, 00:18:01.055 "state": "configuring", 00:18:01.055 "raid_level": "raid1", 00:18:01.055 "superblock": true, 00:18:01.055 "num_base_bdevs": 2, 00:18:01.055 "num_base_bdevs_discovered": 0, 00:18:01.055 "num_base_bdevs_operational": 2, 00:18:01.055 "base_bdevs_list": [ 00:18:01.055 { 00:18:01.055 "name": "BaseBdev1", 00:18:01.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.055 "is_configured": false, 00:18:01.055 "data_offset": 0, 00:18:01.055 "data_size": 0 00:18:01.055 }, 00:18:01.055 { 00:18:01.055 "name": "BaseBdev2", 00:18:01.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.055 "is_configured": false, 00:18:01.055 "data_offset": 0, 00:18:01.055 "data_size": 0 00:18:01.055 } 00:18:01.055 ] 00:18:01.055 }' 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.055 12:35:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.625 [2024-09-30 12:35:13.314412] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:01.625 [2024-09-30 12:35:13.314503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.625 [2024-09-30 12:35:13.326415] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:01.625 [2024-09-30 12:35:13.326499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:01.625 [2024-09-30 12:35:13.326540] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:01.625 [2024-09-30 12:35:13.326564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.625 [2024-09-30 12:35:13.384726] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.625 BaseBdev1 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.625 [ 00:18:01.625 { 00:18:01.625 "name": "BaseBdev1", 00:18:01.625 "aliases": [ 00:18:01.625 "09c7f500-3169-4232-be44-d2b64fa66ff7" 00:18:01.625 ], 00:18:01.625 "product_name": "Malloc disk", 00:18:01.625 "block_size": 4096, 
00:18:01.625 "num_blocks": 8192, 00:18:01.625 "uuid": "09c7f500-3169-4232-be44-d2b64fa66ff7", 00:18:01.625 "assigned_rate_limits": { 00:18:01.625 "rw_ios_per_sec": 0, 00:18:01.625 "rw_mbytes_per_sec": 0, 00:18:01.625 "r_mbytes_per_sec": 0, 00:18:01.625 "w_mbytes_per_sec": 0 00:18:01.625 }, 00:18:01.625 "claimed": true, 00:18:01.625 "claim_type": "exclusive_write", 00:18:01.625 "zoned": false, 00:18:01.625 "supported_io_types": { 00:18:01.625 "read": true, 00:18:01.625 "write": true, 00:18:01.625 "unmap": true, 00:18:01.625 "flush": true, 00:18:01.625 "reset": true, 00:18:01.625 "nvme_admin": false, 00:18:01.625 "nvme_io": false, 00:18:01.625 "nvme_io_md": false, 00:18:01.625 "write_zeroes": true, 00:18:01.625 "zcopy": true, 00:18:01.625 "get_zone_info": false, 00:18:01.625 "zone_management": false, 00:18:01.625 "zone_append": false, 00:18:01.625 "compare": false, 00:18:01.625 "compare_and_write": false, 00:18:01.625 "abort": true, 00:18:01.625 "seek_hole": false, 00:18:01.625 "seek_data": false, 00:18:01.625 "copy": true, 00:18:01.625 "nvme_iov_md": false 00:18:01.625 }, 00:18:01.625 "memory_domains": [ 00:18:01.625 { 00:18:01.625 "dma_device_id": "system", 00:18:01.625 "dma_device_type": 1 00:18:01.625 }, 00:18:01.625 { 00:18:01.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.625 "dma_device_type": 2 00:18:01.625 } 00:18:01.625 ], 00:18:01.625 "driver_specific": {} 00:18:01.625 } 00:18:01.625 ] 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.625 "name": "Existed_Raid", 00:18:01.625 "uuid": "15a02b97-9236-4eae-92e8-18924cbb492f", 00:18:01.625 "strip_size_kb": 0, 00:18:01.625 "state": "configuring", 00:18:01.625 "raid_level": "raid1", 00:18:01.625 "superblock": true, 00:18:01.625 "num_base_bdevs": 2, 00:18:01.625 "num_base_bdevs_discovered": 1, 00:18:01.625 "num_base_bdevs_operational": 2, 00:18:01.625 "base_bdevs_list": [ 00:18:01.625 { 
00:18:01.625 "name": "BaseBdev1", 00:18:01.625 "uuid": "09c7f500-3169-4232-be44-d2b64fa66ff7", 00:18:01.625 "is_configured": true, 00:18:01.625 "data_offset": 256, 00:18:01.625 "data_size": 7936 00:18:01.625 }, 00:18:01.625 { 00:18:01.625 "name": "BaseBdev2", 00:18:01.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.625 "is_configured": false, 00:18:01.625 "data_offset": 0, 00:18:01.625 "data_size": 0 00:18:01.625 } 00:18:01.625 ] 00:18:01.625 }' 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.625 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.196 [2024-09-30 12:35:13.903832] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:02.196 [2024-09-30 12:35:13.903917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.196 [2024-09-30 12:35:13.915890] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:02.196 [2024-09-30 12:35:13.917641] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:02.196 [2024-09-30 12:35:13.917718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.196 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.196 "name": "Existed_Raid", 00:18:02.196 "uuid": "3f7c3dac-38e9-45e9-9ae2-e2f1417e4055", 00:18:02.196 "strip_size_kb": 0, 00:18:02.196 "state": "configuring", 00:18:02.196 "raid_level": "raid1", 00:18:02.196 "superblock": true, 00:18:02.196 "num_base_bdevs": 2, 00:18:02.196 "num_base_bdevs_discovered": 1, 00:18:02.196 "num_base_bdevs_operational": 2, 00:18:02.196 "base_bdevs_list": [ 00:18:02.196 { 00:18:02.196 "name": "BaseBdev1", 00:18:02.196 "uuid": "09c7f500-3169-4232-be44-d2b64fa66ff7", 00:18:02.196 "is_configured": true, 00:18:02.196 "data_offset": 256, 00:18:02.196 "data_size": 7936 00:18:02.196 }, 00:18:02.196 { 00:18:02.196 "name": "BaseBdev2", 00:18:02.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.196 "is_configured": false, 00:18:02.196 "data_offset": 0, 00:18:02.196 "data_size": 0 00:18:02.196 } 00:18:02.196 ] 00:18:02.197 }' 00:18:02.197 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.197 12:35:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.456 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:18:02.456 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.716 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.716 [2024-09-30 12:35:14.391310] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:02.716 BaseBdev2 00:18:02.716 [2024-09-30 12:35:14.391614] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:02.717 [2024-09-30 12:35:14.391635] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:02.717 [2024-09-30 12:35:14.391920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:02.717 [2024-09-30 12:35:14.392070] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:02.717 [2024-09-30 12:35:14.392084] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:02.717 [2024-09-30 12:35:14.392221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.717 12:35:14 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.717 [ 00:18:02.717 { 00:18:02.717 "name": "BaseBdev2", 00:18:02.717 "aliases": [ 00:18:02.717 "b67fc42c-ab70-460a-b15c-2e100899bf6c" 00:18:02.717 ], 00:18:02.717 "product_name": "Malloc disk", 00:18:02.717 "block_size": 4096, 00:18:02.717 "num_blocks": 8192, 00:18:02.717 "uuid": "b67fc42c-ab70-460a-b15c-2e100899bf6c", 00:18:02.717 "assigned_rate_limits": { 00:18:02.717 "rw_ios_per_sec": 0, 00:18:02.717 "rw_mbytes_per_sec": 0, 00:18:02.717 "r_mbytes_per_sec": 0, 00:18:02.717 "w_mbytes_per_sec": 0 00:18:02.717 }, 00:18:02.717 "claimed": true, 00:18:02.717 "claim_type": "exclusive_write", 00:18:02.717 "zoned": false, 00:18:02.717 "supported_io_types": { 00:18:02.717 "read": true, 00:18:02.717 "write": true, 00:18:02.717 "unmap": true, 00:18:02.717 "flush": true, 00:18:02.717 "reset": true, 00:18:02.717 "nvme_admin": false, 00:18:02.717 "nvme_io": false, 00:18:02.717 "nvme_io_md": false, 00:18:02.717 "write_zeroes": true, 00:18:02.717 "zcopy": true, 00:18:02.717 "get_zone_info": false, 00:18:02.717 "zone_management": false, 00:18:02.717 "zone_append": false, 00:18:02.717 "compare": false, 00:18:02.717 "compare_and_write": false, 00:18:02.717 "abort": true, 00:18:02.717 "seek_hole": false, 00:18:02.717 "seek_data": false, 00:18:02.717 "copy": true, 00:18:02.717 "nvme_iov_md": false 00:18:02.717 }, 00:18:02.717 "memory_domains": [ 00:18:02.717 { 00:18:02.717 "dma_device_id": "system", 00:18:02.717 
"dma_device_type": 1 00:18:02.717 }, 00:18:02.717 { 00:18:02.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.717 "dma_device_type": 2 00:18:02.717 } 00:18:02.717 ], 00:18:02.717 "driver_specific": {} 00:18:02.717 } 00:18:02.717 ] 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.717 "name": "Existed_Raid", 00:18:02.717 "uuid": "3f7c3dac-38e9-45e9-9ae2-e2f1417e4055", 00:18:02.717 "strip_size_kb": 0, 00:18:02.717 "state": "online", 00:18:02.717 "raid_level": "raid1", 00:18:02.717 "superblock": true, 00:18:02.717 "num_base_bdevs": 2, 00:18:02.717 "num_base_bdevs_discovered": 2, 00:18:02.717 "num_base_bdevs_operational": 2, 00:18:02.717 "base_bdevs_list": [ 00:18:02.717 { 00:18:02.717 "name": "BaseBdev1", 00:18:02.717 "uuid": "09c7f500-3169-4232-be44-d2b64fa66ff7", 00:18:02.717 "is_configured": true, 00:18:02.717 "data_offset": 256, 00:18:02.717 "data_size": 7936 00:18:02.717 }, 00:18:02.717 { 00:18:02.717 "name": "BaseBdev2", 00:18:02.717 "uuid": "b67fc42c-ab70-460a-b15c-2e100899bf6c", 00:18:02.717 "is_configured": true, 00:18:02.717 "data_offset": 256, 00:18:02.717 "data_size": 7936 00:18:02.717 } 00:18:02.717 ] 00:18:02.717 }' 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.717 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.977 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:02.977 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:02.977 12:35:14 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:02.978 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:02.978 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:02.978 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:02.978 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:02.978 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.978 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.978 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:02.978 [2024-09-30 12:35:14.854810] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.978 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.238 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:03.238 "name": "Existed_Raid", 00:18:03.238 "aliases": [ 00:18:03.238 "3f7c3dac-38e9-45e9-9ae2-e2f1417e4055" 00:18:03.238 ], 00:18:03.238 "product_name": "Raid Volume", 00:18:03.238 "block_size": 4096, 00:18:03.238 "num_blocks": 7936, 00:18:03.238 "uuid": "3f7c3dac-38e9-45e9-9ae2-e2f1417e4055", 00:18:03.238 "assigned_rate_limits": { 00:18:03.238 "rw_ios_per_sec": 0, 00:18:03.238 "rw_mbytes_per_sec": 0, 00:18:03.238 "r_mbytes_per_sec": 0, 00:18:03.238 "w_mbytes_per_sec": 0 00:18:03.238 }, 00:18:03.238 "claimed": false, 00:18:03.238 "zoned": false, 00:18:03.238 "supported_io_types": { 00:18:03.238 "read": true, 00:18:03.238 "write": true, 00:18:03.238 "unmap": false, 00:18:03.238 "flush": false, 00:18:03.238 "reset": true, 00:18:03.238 
"nvme_admin": false, 00:18:03.238 "nvme_io": false, 00:18:03.238 "nvme_io_md": false, 00:18:03.238 "write_zeroes": true, 00:18:03.238 "zcopy": false, 00:18:03.238 "get_zone_info": false, 00:18:03.238 "zone_management": false, 00:18:03.238 "zone_append": false, 00:18:03.238 "compare": false, 00:18:03.238 "compare_and_write": false, 00:18:03.238 "abort": false, 00:18:03.238 "seek_hole": false, 00:18:03.238 "seek_data": false, 00:18:03.239 "copy": false, 00:18:03.239 "nvme_iov_md": false 00:18:03.239 }, 00:18:03.239 "memory_domains": [ 00:18:03.239 { 00:18:03.239 "dma_device_id": "system", 00:18:03.239 "dma_device_type": 1 00:18:03.239 }, 00:18:03.239 { 00:18:03.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.239 "dma_device_type": 2 00:18:03.239 }, 00:18:03.239 { 00:18:03.239 "dma_device_id": "system", 00:18:03.239 "dma_device_type": 1 00:18:03.239 }, 00:18:03.239 { 00:18:03.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.239 "dma_device_type": 2 00:18:03.239 } 00:18:03.239 ], 00:18:03.239 "driver_specific": { 00:18:03.239 "raid": { 00:18:03.239 "uuid": "3f7c3dac-38e9-45e9-9ae2-e2f1417e4055", 00:18:03.239 "strip_size_kb": 0, 00:18:03.239 "state": "online", 00:18:03.239 "raid_level": "raid1", 00:18:03.239 "superblock": true, 00:18:03.239 "num_base_bdevs": 2, 00:18:03.239 "num_base_bdevs_discovered": 2, 00:18:03.239 "num_base_bdevs_operational": 2, 00:18:03.239 "base_bdevs_list": [ 00:18:03.239 { 00:18:03.239 "name": "BaseBdev1", 00:18:03.239 "uuid": "09c7f500-3169-4232-be44-d2b64fa66ff7", 00:18:03.239 "is_configured": true, 00:18:03.239 "data_offset": 256, 00:18:03.239 "data_size": 7936 00:18:03.239 }, 00:18:03.239 { 00:18:03.239 "name": "BaseBdev2", 00:18:03.239 "uuid": "b67fc42c-ab70-460a-b15c-2e100899bf6c", 00:18:03.239 "is_configured": true, 00:18:03.239 "data_offset": 256, 00:18:03.239 "data_size": 7936 00:18:03.239 } 00:18:03.239 ] 00:18:03.239 } 00:18:03.239 } 00:18:03.239 }' 00:18:03.239 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:03.239 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:03.239 BaseBdev2' 00:18:03.239 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.239 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:03.239 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:03.239 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:03.239 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.239 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.239 12:35:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.239 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.239 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:03.239 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:03.239 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:03.239 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.239 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:03.239 12:35:15 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.239 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.239 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.239 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:03.239 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:03.239 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:03.239 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.239 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.239 [2024-09-30 12:35:15.086202] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:03.499 12:35:15 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.499 "name": "Existed_Raid", 00:18:03.499 "uuid": "3f7c3dac-38e9-45e9-9ae2-e2f1417e4055", 00:18:03.499 "strip_size_kb": 0, 00:18:03.499 "state": "online", 00:18:03.499 "raid_level": "raid1", 00:18:03.499 "superblock": true, 00:18:03.499 "num_base_bdevs": 2, 00:18:03.499 "num_base_bdevs_discovered": 1, 00:18:03.499 "num_base_bdevs_operational": 1, 00:18:03.499 
"base_bdevs_list": [ 00:18:03.499 { 00:18:03.499 "name": null, 00:18:03.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.499 "is_configured": false, 00:18:03.499 "data_offset": 0, 00:18:03.499 "data_size": 7936 00:18:03.499 }, 00:18:03.499 { 00:18:03.499 "name": "BaseBdev2", 00:18:03.499 "uuid": "b67fc42c-ab70-460a-b15c-2e100899bf6c", 00:18:03.499 "is_configured": true, 00:18:03.499 "data_offset": 256, 00:18:03.499 "data_size": 7936 00:18:03.499 } 00:18:03.499 ] 00:18:03.499 }' 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.499 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.759 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:03.759 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:03.759 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:03.759 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.759 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.759 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.019 [2024-09-30 12:35:15.701491] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:04.019 [2024-09-30 12:35:15.701649] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:04.019 [2024-09-30 12:35:15.789800] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.019 [2024-09-30 12:35:15.789922] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.019 [2024-09-30 12:35:15.789963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85778 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 85778 ']' 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 85778 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85778 00:18:04.019 killing process with pid 85778 00:18:04.019 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:04.020 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:04.020 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85778' 00:18:04.020 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 85778 00:18:04.020 [2024-09-30 12:35:15.877887] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:04.020 12:35:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 85778 00:18:04.020 [2024-09-30 12:35:15.893462] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:05.402 ************************************ 00:18:05.402 END TEST raid_state_function_test_sb_4k 00:18:05.402 ************************************ 00:18:05.402 12:35:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:18:05.402 00:18:05.402 real 0m5.200s 00:18:05.402 user 0m7.445s 00:18:05.402 sys 0m0.912s 00:18:05.402 12:35:17 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:05.402 12:35:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.402 12:35:17 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:18:05.402 12:35:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:05.402 12:35:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:05.402 12:35:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:05.402 ************************************ 00:18:05.402 START TEST raid_superblock_test_4k 00:18:05.402 ************************************ 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86030 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86030 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 86030 ']' 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:05.402 12:35:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.403 [2024-09-30 12:35:17.262225] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:18:05.403 [2024-09-30 12:35:17.262469] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86030 ] 00:18:05.663 [2024-09-30 12:35:17.431184] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.923 [2024-09-30 12:35:17.625276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.183 [2024-09-30 12:35:17.818637] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.183 [2024-09-30 12:35:17.818751] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.444 malloc1 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.444 [2024-09-30 12:35:18.133045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:06.444 [2024-09-30 12:35:18.133185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.444 [2024-09-30 12:35:18.133224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:06.444 [2024-09-30 12:35:18.133254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.444 [2024-09-30 12:35:18.135153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.444 [2024-09-30 12:35:18.135223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:06.444 pt1 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.444 malloc2 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.444 [2024-09-30 12:35:18.217679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:06.444 [2024-09-30 12:35:18.217799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.444 [2024-09-30 12:35:18.217839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:06.444 [2024-09-30 12:35:18.217864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.444 [2024-09-30 12:35:18.219830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.444 [2024-09-30 
12:35:18.219901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:06.444 pt2 00:18:06.444 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.445 [2024-09-30 12:35:18.229721] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:06.445 [2024-09-30 12:35:18.231402] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:06.445 [2024-09-30 12:35:18.231612] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:06.445 [2024-09-30 12:35:18.231658] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:06.445 [2024-09-30 12:35:18.231923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:06.445 [2024-09-30 12:35:18.232110] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:06.445 [2024-09-30 12:35:18.232153] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:06.445 [2024-09-30 12:35:18.232329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.445 "name": "raid_bdev1", 00:18:06.445 "uuid": "57b0248f-3f57-43b9-8740-163022626876", 00:18:06.445 "strip_size_kb": 0, 00:18:06.445 "state": "online", 00:18:06.445 "raid_level": "raid1", 00:18:06.445 "superblock": true, 00:18:06.445 "num_base_bdevs": 2, 00:18:06.445 
"num_base_bdevs_discovered": 2, 00:18:06.445 "num_base_bdevs_operational": 2, 00:18:06.445 "base_bdevs_list": [ 00:18:06.445 { 00:18:06.445 "name": "pt1", 00:18:06.445 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:06.445 "is_configured": true, 00:18:06.445 "data_offset": 256, 00:18:06.445 "data_size": 7936 00:18:06.445 }, 00:18:06.445 { 00:18:06.445 "name": "pt2", 00:18:06.445 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.445 "is_configured": true, 00:18:06.445 "data_offset": 256, 00:18:06.445 "data_size": 7936 00:18:06.445 } 00:18:06.445 ] 00:18:06.445 }' 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.445 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:07.015 [2024-09-30 12:35:18.737067] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:07.015 "name": "raid_bdev1", 00:18:07.015 "aliases": [ 00:18:07.015 "57b0248f-3f57-43b9-8740-163022626876" 00:18:07.015 ], 00:18:07.015 "product_name": "Raid Volume", 00:18:07.015 "block_size": 4096, 00:18:07.015 "num_blocks": 7936, 00:18:07.015 "uuid": "57b0248f-3f57-43b9-8740-163022626876", 00:18:07.015 "assigned_rate_limits": { 00:18:07.015 "rw_ios_per_sec": 0, 00:18:07.015 "rw_mbytes_per_sec": 0, 00:18:07.015 "r_mbytes_per_sec": 0, 00:18:07.015 "w_mbytes_per_sec": 0 00:18:07.015 }, 00:18:07.015 "claimed": false, 00:18:07.015 "zoned": false, 00:18:07.015 "supported_io_types": { 00:18:07.015 "read": true, 00:18:07.015 "write": true, 00:18:07.015 "unmap": false, 00:18:07.015 "flush": false, 00:18:07.015 "reset": true, 00:18:07.015 "nvme_admin": false, 00:18:07.015 "nvme_io": false, 00:18:07.015 "nvme_io_md": false, 00:18:07.015 "write_zeroes": true, 00:18:07.015 "zcopy": false, 00:18:07.015 "get_zone_info": false, 00:18:07.015 "zone_management": false, 00:18:07.015 "zone_append": false, 00:18:07.015 "compare": false, 00:18:07.015 "compare_and_write": false, 00:18:07.015 "abort": false, 00:18:07.015 "seek_hole": false, 00:18:07.015 "seek_data": false, 00:18:07.015 "copy": false, 00:18:07.015 "nvme_iov_md": false 00:18:07.015 }, 00:18:07.015 "memory_domains": [ 00:18:07.015 { 00:18:07.015 "dma_device_id": "system", 00:18:07.015 "dma_device_type": 1 00:18:07.015 }, 00:18:07.015 { 00:18:07.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.015 "dma_device_type": 2 00:18:07.015 }, 00:18:07.015 { 00:18:07.015 "dma_device_id": "system", 00:18:07.015 "dma_device_type": 1 00:18:07.015 }, 00:18:07.015 { 00:18:07.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.015 "dma_device_type": 2 00:18:07.015 } 00:18:07.015 ], 
00:18:07.015 "driver_specific": { 00:18:07.015 "raid": { 00:18:07.015 "uuid": "57b0248f-3f57-43b9-8740-163022626876", 00:18:07.015 "strip_size_kb": 0, 00:18:07.015 "state": "online", 00:18:07.015 "raid_level": "raid1", 00:18:07.015 "superblock": true, 00:18:07.015 "num_base_bdevs": 2, 00:18:07.015 "num_base_bdevs_discovered": 2, 00:18:07.015 "num_base_bdevs_operational": 2, 00:18:07.015 "base_bdevs_list": [ 00:18:07.015 { 00:18:07.015 "name": "pt1", 00:18:07.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:07.015 "is_configured": true, 00:18:07.015 "data_offset": 256, 00:18:07.015 "data_size": 7936 00:18:07.015 }, 00:18:07.015 { 00:18:07.015 "name": "pt2", 00:18:07.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.015 "is_configured": true, 00:18:07.015 "data_offset": 256, 00:18:07.015 "data_size": 7936 00:18:07.015 } 00:18:07.015 ] 00:18:07.015 } 00:18:07.015 } 00:18:07.015 }' 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:07.015 pt2' 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.015 12:35:18 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.015 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.285 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:07.285 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:07.285 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:07.285 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:07.285 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.285 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.285 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.285 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.285 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:07.285 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:07.285 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:07.285 12:35:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:07.285 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.285 12:35:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.285 [2024-09-30 12:35:18.988614] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=57b0248f-3f57-43b9-8740-163022626876 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 57b0248f-3f57-43b9-8740-163022626876 ']' 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.285 [2024-09-30 12:35:19.036325] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.285 [2024-09-30 12:35:19.036389] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.285 [2024-09-30 12:35:19.036478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.285 [2024-09-30 12:35:19.036538] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.285 [2024-09-30 12:35:19.036570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.285 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.285 [2024-09-30 12:35:19.172088] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:07.286 [2024-09-30 12:35:19.173835] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:07.286 [2024-09-30 12:35:19.173928] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:07.286 [2024-09-30 12:35:19.174012] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:07.286 [2024-09-30 12:35:19.174061] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.286 [2024-09-30 12:35:19.174096] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:07.547 request: 00:18:07.547 { 00:18:07.547 "name": "raid_bdev1", 00:18:07.547 "raid_level": "raid1", 00:18:07.547 "base_bdevs": [ 00:18:07.547 "malloc1", 00:18:07.547 "malloc2" 00:18:07.547 ], 00:18:07.547 "superblock": false, 00:18:07.547 "method": "bdev_raid_create", 00:18:07.547 "req_id": 1 00:18:07.547 } 00:18:07.547 Got JSON-RPC error response 00:18:07.547 response: 00:18:07.547 { 00:18:07.547 "code": -17, 00:18:07.547 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:07.547 } 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.547 [2024-09-30 12:35:19.239955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:07.547 [2024-09-30 12:35:19.240047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.547 [2024-09-30 12:35:19.240078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:07.547 [2024-09-30 12:35:19.240111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.547 [2024-09-30 12:35:19.242088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.547 [2024-09-30 12:35:19.242159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:07.547 [2024-09-30 12:35:19.242234] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:07.547 [2024-09-30 12:35:19.242308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:07.547 pt1 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.547 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.547 "name": "raid_bdev1", 00:18:07.547 "uuid": "57b0248f-3f57-43b9-8740-163022626876", 00:18:07.547 "strip_size_kb": 0, 00:18:07.547 "state": "configuring", 00:18:07.547 "raid_level": "raid1", 00:18:07.547 "superblock": true, 00:18:07.547 "num_base_bdevs": 2, 00:18:07.547 "num_base_bdevs_discovered": 1, 00:18:07.547 "num_base_bdevs_operational": 2, 00:18:07.547 "base_bdevs_list": [ 00:18:07.547 { 00:18:07.547 "name": "pt1", 00:18:07.547 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:07.547 "is_configured": true, 00:18:07.547 "data_offset": 256, 00:18:07.547 "data_size": 7936 00:18:07.547 }, 00:18:07.547 { 00:18:07.547 "name": null, 00:18:07.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.548 "is_configured": false, 00:18:07.548 "data_offset": 256, 00:18:07.548 "data_size": 7936 00:18:07.548 } 
00:18:07.548 ] 00:18:07.548 }' 00:18:07.548 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.548 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.117 [2024-09-30 12:35:19.743088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:08.117 [2024-09-30 12:35:19.743181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.117 [2024-09-30 12:35:19.743231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:08.117 [2024-09-30 12:35:19.743263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.117 [2024-09-30 12:35:19.743639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.117 [2024-09-30 12:35:19.743660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:08.117 [2024-09-30 12:35:19.743712] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:08.117 [2024-09-30 12:35:19.743731] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:08.117 [2024-09-30 12:35:19.743868] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:18:08.117 [2024-09-30 12:35:19.743880] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:08.117 [2024-09-30 12:35:19.744093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:08.117 [2024-09-30 12:35:19.744235] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:08.117 [2024-09-30 12:35:19.744244] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:08.117 [2024-09-30 12:35:19.744372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.117 pt2 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.117 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.118 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.118 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.118 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.118 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.118 "name": "raid_bdev1", 00:18:08.118 "uuid": "57b0248f-3f57-43b9-8740-163022626876", 00:18:08.118 "strip_size_kb": 0, 00:18:08.118 "state": "online", 00:18:08.118 "raid_level": "raid1", 00:18:08.118 "superblock": true, 00:18:08.118 "num_base_bdevs": 2, 00:18:08.118 "num_base_bdevs_discovered": 2, 00:18:08.118 "num_base_bdevs_operational": 2, 00:18:08.118 "base_bdevs_list": [ 00:18:08.118 { 00:18:08.118 "name": "pt1", 00:18:08.118 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:08.118 "is_configured": true, 00:18:08.118 "data_offset": 256, 00:18:08.118 "data_size": 7936 00:18:08.118 }, 00:18:08.118 { 00:18:08.118 "name": "pt2", 00:18:08.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:08.118 "is_configured": true, 00:18:08.118 "data_offset": 256, 00:18:08.118 "data_size": 7936 00:18:08.118 } 00:18:08.118 ] 00:18:08.118 }' 00:18:08.118 12:35:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.118 12:35:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.377 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:18:08.377 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:08.377 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:08.377 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:08.377 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:08.377 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:08.377 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:08.377 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:08.377 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.377 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.377 [2024-09-30 12:35:20.262435] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:08.638 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.638 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:08.638 "name": "raid_bdev1", 00:18:08.638 "aliases": [ 00:18:08.638 "57b0248f-3f57-43b9-8740-163022626876" 00:18:08.638 ], 00:18:08.638 "product_name": "Raid Volume", 00:18:08.638 "block_size": 4096, 00:18:08.638 "num_blocks": 7936, 00:18:08.638 "uuid": "57b0248f-3f57-43b9-8740-163022626876", 00:18:08.638 "assigned_rate_limits": { 00:18:08.638 "rw_ios_per_sec": 0, 00:18:08.638 "rw_mbytes_per_sec": 0, 00:18:08.638 "r_mbytes_per_sec": 0, 00:18:08.638 "w_mbytes_per_sec": 0 00:18:08.638 }, 00:18:08.638 "claimed": false, 00:18:08.638 "zoned": false, 00:18:08.638 "supported_io_types": { 00:18:08.638 "read": true, 00:18:08.638 "write": true, 00:18:08.638 "unmap": false, 
00:18:08.638 "flush": false, 00:18:08.638 "reset": true, 00:18:08.638 "nvme_admin": false, 00:18:08.638 "nvme_io": false, 00:18:08.638 "nvme_io_md": false, 00:18:08.639 "write_zeroes": true, 00:18:08.639 "zcopy": false, 00:18:08.639 "get_zone_info": false, 00:18:08.639 "zone_management": false, 00:18:08.639 "zone_append": false, 00:18:08.639 "compare": false, 00:18:08.639 "compare_and_write": false, 00:18:08.639 "abort": false, 00:18:08.639 "seek_hole": false, 00:18:08.639 "seek_data": false, 00:18:08.639 "copy": false, 00:18:08.639 "nvme_iov_md": false 00:18:08.639 }, 00:18:08.639 "memory_domains": [ 00:18:08.639 { 00:18:08.639 "dma_device_id": "system", 00:18:08.639 "dma_device_type": 1 00:18:08.639 }, 00:18:08.639 { 00:18:08.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.639 "dma_device_type": 2 00:18:08.639 }, 00:18:08.639 { 00:18:08.639 "dma_device_id": "system", 00:18:08.639 "dma_device_type": 1 00:18:08.639 }, 00:18:08.639 { 00:18:08.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.639 "dma_device_type": 2 00:18:08.639 } 00:18:08.639 ], 00:18:08.639 "driver_specific": { 00:18:08.639 "raid": { 00:18:08.639 "uuid": "57b0248f-3f57-43b9-8740-163022626876", 00:18:08.639 "strip_size_kb": 0, 00:18:08.639 "state": "online", 00:18:08.639 "raid_level": "raid1", 00:18:08.639 "superblock": true, 00:18:08.639 "num_base_bdevs": 2, 00:18:08.639 "num_base_bdevs_discovered": 2, 00:18:08.639 "num_base_bdevs_operational": 2, 00:18:08.639 "base_bdevs_list": [ 00:18:08.639 { 00:18:08.639 "name": "pt1", 00:18:08.639 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:08.639 "is_configured": true, 00:18:08.639 "data_offset": 256, 00:18:08.639 "data_size": 7936 00:18:08.639 }, 00:18:08.639 { 00:18:08.639 "name": "pt2", 00:18:08.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:08.639 "is_configured": true, 00:18:08.639 "data_offset": 256, 00:18:08.639 "data_size": 7936 00:18:08.639 } 00:18:08.639 ] 00:18:08.639 } 00:18:08.639 } 00:18:08.639 }' 00:18:08.639 
12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:08.639 pt2' 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.639 [2024-09-30 12:35:20.486045] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 57b0248f-3f57-43b9-8740-163022626876 '!=' 57b0248f-3f57-43b9-8740-163022626876 ']' 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.639 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.639 [2024-09-30 12:35:20.529840] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:18:08.902 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.902 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.902 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.902 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.902 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.902 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.902 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:08.902 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.902 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.902 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.902 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.902 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.902 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.902 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.902 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.902 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.902 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.902 "name": "raid_bdev1", 00:18:08.902 "uuid": 
"57b0248f-3f57-43b9-8740-163022626876", 00:18:08.902 "strip_size_kb": 0, 00:18:08.902 "state": "online", 00:18:08.902 "raid_level": "raid1", 00:18:08.902 "superblock": true, 00:18:08.902 "num_base_bdevs": 2, 00:18:08.902 "num_base_bdevs_discovered": 1, 00:18:08.902 "num_base_bdevs_operational": 1, 00:18:08.902 "base_bdevs_list": [ 00:18:08.902 { 00:18:08.902 "name": null, 00:18:08.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.902 "is_configured": false, 00:18:08.902 "data_offset": 0, 00:18:08.902 "data_size": 7936 00:18:08.902 }, 00:18:08.902 { 00:18:08.902 "name": "pt2", 00:18:08.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:08.902 "is_configured": true, 00:18:08.902 "data_offset": 256, 00:18:08.902 "data_size": 7936 00:18:08.902 } 00:18:08.902 ] 00:18:08.902 }' 00:18:08.902 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.902 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.170 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:09.170 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.170 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.170 [2024-09-30 12:35:20.965068] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.170 [2024-09-30 12:35:20.965132] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:09.170 [2024-09-30 12:35:20.965200] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.170 [2024-09-30 12:35:20.965264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:09.170 [2024-09-30 12:35:20.965317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:18:09.170 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.170 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.170 12:35:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:09.170 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.170 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.170 12:35:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.170 [2024-09-30 12:35:21.036948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:09.170 [2024-09-30 12:35:21.037051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.170 [2024-09-30 12:35:21.037080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:09.170 [2024-09-30 12:35:21.037146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.170 [2024-09-30 12:35:21.039138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.170 [2024-09-30 12:35:21.039178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:09.170 [2024-09-30 12:35:21.039240] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:09.170 [2024-09-30 12:35:21.039279] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:09.170 [2024-09-30 12:35:21.039374] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:09.170 [2024-09-30 12:35:21.039385] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:09.170 [2024-09-30 12:35:21.039616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:09.170 [2024-09-30 12:35:21.039770] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:09.170 [2024-09-30 12:35:21.039785] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:18:09.170 [2024-09-30 12:35:21.039907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.170 pt2 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:09.170 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.171 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.171 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.171 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.171 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.171 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.171 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.171 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.442 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.442 12:35:21 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.442 "name": "raid_bdev1", 00:18:09.442 "uuid": "57b0248f-3f57-43b9-8740-163022626876", 00:18:09.442 "strip_size_kb": 0, 00:18:09.442 "state": "online", 00:18:09.442 "raid_level": "raid1", 00:18:09.442 "superblock": true, 00:18:09.442 "num_base_bdevs": 2, 00:18:09.442 "num_base_bdevs_discovered": 1, 00:18:09.442 "num_base_bdevs_operational": 1, 00:18:09.442 "base_bdevs_list": [ 00:18:09.442 { 00:18:09.442 "name": null, 00:18:09.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.442 "is_configured": false, 00:18:09.442 "data_offset": 256, 00:18:09.442 "data_size": 7936 00:18:09.442 }, 00:18:09.442 { 00:18:09.442 "name": "pt2", 00:18:09.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:09.442 "is_configured": true, 00:18:09.442 "data_offset": 256, 00:18:09.442 "data_size": 7936 00:18:09.442 } 00:18:09.442 ] 00:18:09.442 }' 00:18:09.442 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.442 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.703 [2024-09-30 12:35:21.512111] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.703 [2024-09-30 12:35:21.512185] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:09.703 [2024-09-30 12:35:21.512250] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.703 [2024-09-30 12:35:21.512301] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:09.703 [2024-09-30 12:35:21.512331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.703 [2024-09-30 12:35:21.576019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:09.703 [2024-09-30 12:35:21.576109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.703 [2024-09-30 12:35:21.576141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:09.703 [2024-09-30 12:35:21.576167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.703 [2024-09-30 12:35:21.578170] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.703 [2024-09-30 12:35:21.578239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:09.703 [2024-09-30 12:35:21.578319] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:09.703 [2024-09-30 12:35:21.578378] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:09.703 [2024-09-30 12:35:21.578506] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:09.703 [2024-09-30 12:35:21.578557] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.703 [2024-09-30 12:35:21.578594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:09.703 [2024-09-30 12:35:21.578684] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:09.703 [2024-09-30 12:35:21.578792] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:09.703 [2024-09-30 12:35:21.578828] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:09.703 [2024-09-30 12:35:21.579043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:09.703 [2024-09-30 12:35:21.579200] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:09.703 [2024-09-30 12:35:21.579240] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:09.703 [2024-09-30 12:35:21.579400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.703 pt1 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.703 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.963 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.963 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.963 "name": "raid_bdev1", 00:18:09.963 "uuid": "57b0248f-3f57-43b9-8740-163022626876", 00:18:09.963 "strip_size_kb": 0, 00:18:09.963 "state": "online", 00:18:09.963 
"raid_level": "raid1", 00:18:09.963 "superblock": true, 00:18:09.963 "num_base_bdevs": 2, 00:18:09.963 "num_base_bdevs_discovered": 1, 00:18:09.963 "num_base_bdevs_operational": 1, 00:18:09.963 "base_bdevs_list": [ 00:18:09.963 { 00:18:09.963 "name": null, 00:18:09.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.963 "is_configured": false, 00:18:09.963 "data_offset": 256, 00:18:09.963 "data_size": 7936 00:18:09.963 }, 00:18:09.963 { 00:18:09.963 "name": "pt2", 00:18:09.963 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:09.963 "is_configured": true, 00:18:09.963 "data_offset": 256, 00:18:09.963 "data_size": 7936 00:18:09.963 } 00:18:09.963 ] 00:18:09.963 }' 00:18:09.963 12:35:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.963 12:35:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 12:35:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:10.222 12:35:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 12:35:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:10.222 12:35:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 12:35:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 12:35:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:10.222 12:35:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:10.222 12:35:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:10.222 12:35:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 12:35:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:18:10.222 [2024-09-30 12:35:22.087475] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.222 12:35:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.482 12:35:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 57b0248f-3f57-43b9-8740-163022626876 '!=' 57b0248f-3f57-43b9-8740-163022626876 ']' 00:18:10.482 12:35:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86030 00:18:10.482 12:35:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 86030 ']' 00:18:10.482 12:35:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 86030 00:18:10.482 12:35:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:18:10.482 12:35:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:10.482 12:35:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86030 00:18:10.482 12:35:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:10.482 killing process with pid 86030 00:18:10.482 12:35:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:10.482 12:35:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86030' 00:18:10.482 12:35:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 86030 00:18:10.482 [2024-09-30 12:35:22.167609] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:10.482 [2024-09-30 12:35:22.167668] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:10.482 [2024-09-30 12:35:22.167699] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:10.482 [2024-09-30 
12:35:22.167714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:10.482 12:35:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 86030 00:18:10.482 [2024-09-30 12:35:22.361188] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:11.864 12:35:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:18:11.864 00:18:11.864 real 0m6.386s 00:18:11.864 user 0m9.668s 00:18:11.864 sys 0m1.168s 00:18:11.864 12:35:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:11.864 12:35:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.864 ************************************ 00:18:11.864 END TEST raid_superblock_test_4k 00:18:11.864 ************************************ 00:18:11.864 12:35:23 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:18:11.864 12:35:23 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:18:11.864 12:35:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:11.864 12:35:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:11.864 12:35:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:11.864 ************************************ 00:18:11.864 START TEST raid_rebuild_test_sb_4k 00:18:11.864 ************************************ 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:11.864 12:35:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:11.864 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86358 00:18:11.865 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86358 00:18:11.865 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:11.865 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 86358 ']' 00:18:11.865 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.865 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:11.865 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.865 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:11.865 12:35:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.865 [2024-09-30 12:35:23.731911] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:11.865 [2024-09-30 12:35:23.732085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86358 ] 00:18:11.865 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:18:11.865 Zero copy mechanism will not be used. 00:18:12.125 [2024-09-30 12:35:23.898949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.385 [2024-09-30 12:35:24.085506] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.385 [2024-09-30 12:35:24.263511] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.385 [2024-09-30 12:35:24.263626] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.956 BaseBdev1_malloc 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.956 [2024-09-30 12:35:24.590265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:12.956 [2024-09-30 12:35:24.590337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.956 [2024-09-30 12:35:24.590359] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:18:12.956 [2024-09-30 12:35:24.590371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.956 [2024-09-30 12:35:24.592357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.956 [2024-09-30 12:35:24.592399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:12.956 BaseBdev1 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.956 BaseBdev2_malloc 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.956 [2024-09-30 12:35:24.677922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:12.956 [2024-09-30 12:35:24.678000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.956 [2024-09-30 12:35:24.678019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:12.956 [2024-09-30 12:35:24.678031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:12.956 [2024-09-30 12:35:24.680030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.956 [2024-09-30 12:35:24.680071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:12.956 BaseBdev2 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.956 spare_malloc 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.956 spare_delay 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.956 [2024-09-30 12:35:24.745253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:12.956 [2024-09-30 12:35:24.745371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.956 [2024-09-30 12:35:24.745407] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:12.956 [2024-09-30 12:35:24.745439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.956 [2024-09-30 12:35:24.747350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.956 [2024-09-30 12:35:24.747423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:12.956 spare 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.956 [2024-09-30 12:35:24.757278] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.956 [2024-09-30 12:35:24.758922] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:12.956 [2024-09-30 12:35:24.759130] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:12.956 [2024-09-30 12:35:24.759165] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:12.956 [2024-09-30 12:35:24.759419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:12.956 [2024-09-30 12:35:24.759624] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:12.956 [2024-09-30 12:35:24.759663] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:12.956 [2024-09-30 12:35:24.759836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.956 
12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.956 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.957 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.957 "name": "raid_bdev1", 00:18:12.957 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 
00:18:12.957 "strip_size_kb": 0, 00:18:12.957 "state": "online", 00:18:12.957 "raid_level": "raid1", 00:18:12.957 "superblock": true, 00:18:12.957 "num_base_bdevs": 2, 00:18:12.957 "num_base_bdevs_discovered": 2, 00:18:12.957 "num_base_bdevs_operational": 2, 00:18:12.957 "base_bdevs_list": [ 00:18:12.957 { 00:18:12.957 "name": "BaseBdev1", 00:18:12.957 "uuid": "54a208d5-1236-576e-b277-882d2fca50b0", 00:18:12.957 "is_configured": true, 00:18:12.957 "data_offset": 256, 00:18:12.957 "data_size": 7936 00:18:12.957 }, 00:18:12.957 { 00:18:12.957 "name": "BaseBdev2", 00:18:12.957 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:12.957 "is_configured": true, 00:18:12.957 "data_offset": 256, 00:18:12.957 "data_size": 7936 00:18:12.957 } 00:18:12.957 ] 00:18:12.957 }' 00:18:12.957 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.957 12:35:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.526 [2024-09-30 12:35:25.220758] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:13.526 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:13.786 [2024-09-30 12:35:25.496062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:18:13.786 /dev/nbd0 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:13.786 1+0 records in 00:18:13.786 1+0 records out 00:18:13.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577002 s, 7.1 MB/s 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:13.786 12:35:25 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:13.786 12:35:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:14.355 7936+0 records in 00:18:14.355 7936+0 records out 00:18:14.355 32505856 bytes (33 MB, 31 MiB) copied, 0.628208 s, 51.7 MB/s 00:18:14.355 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:14.355 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:14.355 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:14.355 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:14.355 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:14.355 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:14.355 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:14.614 [2024-09-30 12:35:26.425819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.614 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:14.614 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:14.614 12:35:26 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:14.614 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:14.614 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:14.614 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:14.614 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:14.614 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:14.614 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:14.614 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.614 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.614 [2024-09-30 12:35:26.449861] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:14.614 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.615 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:14.615 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.615 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.615 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.615 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.615 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:14.615 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.615 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:18:14.615 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.615 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.615 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.615 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.615 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.615 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.615 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.615 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.615 "name": "raid_bdev1", 00:18:14.615 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:14.615 "strip_size_kb": 0, 00:18:14.615 "state": "online", 00:18:14.615 "raid_level": "raid1", 00:18:14.615 "superblock": true, 00:18:14.615 "num_base_bdevs": 2, 00:18:14.615 "num_base_bdevs_discovered": 1, 00:18:14.615 "num_base_bdevs_operational": 1, 00:18:14.615 "base_bdevs_list": [ 00:18:14.615 { 00:18:14.615 "name": null, 00:18:14.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.615 "is_configured": false, 00:18:14.615 "data_offset": 0, 00:18:14.615 "data_size": 7936 00:18:14.615 }, 00:18:14.615 { 00:18:14.615 "name": "BaseBdev2", 00:18:14.615 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:14.615 "is_configured": true, 00:18:14.615 "data_offset": 256, 00:18:14.615 "data_size": 7936 00:18:14.615 } 00:18:14.615 ] 00:18:14.615 }' 00:18:14.615 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.615 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.184 12:35:26 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:15.184 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.184 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.184 [2024-09-30 12:35:26.945063] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:15.184 [2024-09-30 12:35:26.959121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:15.184 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.184 12:35:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:15.184 [2024-09-30 12:35:26.960926] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:16.124 12:35:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.124 12:35:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.124 12:35:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.124 12:35:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.124 12:35:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.124 12:35:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.124 12:35:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.124 12:35:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.124 12:35:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.124 12:35:27 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.124 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.124 "name": "raid_bdev1", 00:18:16.124 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:16.124 "strip_size_kb": 0, 00:18:16.124 "state": "online", 00:18:16.124 "raid_level": "raid1", 00:18:16.124 "superblock": true, 00:18:16.124 "num_base_bdevs": 2, 00:18:16.124 "num_base_bdevs_discovered": 2, 00:18:16.124 "num_base_bdevs_operational": 2, 00:18:16.124 "process": { 00:18:16.124 "type": "rebuild", 00:18:16.124 "target": "spare", 00:18:16.124 "progress": { 00:18:16.124 "blocks": 2560, 00:18:16.124 "percent": 32 00:18:16.124 } 00:18:16.124 }, 00:18:16.124 "base_bdevs_list": [ 00:18:16.124 { 00:18:16.124 "name": "spare", 00:18:16.124 "uuid": "9d1657c8-ac7f-5e7f-8126-59173b41caaf", 00:18:16.124 "is_configured": true, 00:18:16.124 "data_offset": 256, 00:18:16.124 "data_size": 7936 00:18:16.124 }, 00:18:16.124 { 00:18:16.124 "name": "BaseBdev2", 00:18:16.124 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:16.124 "is_configured": true, 00:18:16.124 "data_offset": 256, 00:18:16.124 "data_size": 7936 00:18:16.124 } 00:18:16.124 ] 00:18:16.124 }' 00:18:16.124 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.385 12:35:28 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.385 [2024-09-30 12:35:28.120554] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.385 [2024-09-30 12:35:28.165470] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:16.385 [2024-09-30 12:35:28.165527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.385 [2024-09-30 12:35:28.165541] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.385 [2024-09-30 12:35:28.165551] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.385 12:35:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.385 "name": "raid_bdev1", 00:18:16.385 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:16.385 "strip_size_kb": 0, 00:18:16.385 "state": "online", 00:18:16.385 "raid_level": "raid1", 00:18:16.385 "superblock": true, 00:18:16.385 "num_base_bdevs": 2, 00:18:16.385 "num_base_bdevs_discovered": 1, 00:18:16.385 "num_base_bdevs_operational": 1, 00:18:16.385 "base_bdevs_list": [ 00:18:16.385 { 00:18:16.385 "name": null, 00:18:16.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.385 "is_configured": false, 00:18:16.385 "data_offset": 0, 00:18:16.385 "data_size": 7936 00:18:16.385 }, 00:18:16.385 { 00:18:16.385 "name": "BaseBdev2", 00:18:16.385 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:16.385 "is_configured": true, 00:18:16.385 "data_offset": 256, 00:18:16.385 "data_size": 7936 00:18:16.385 } 00:18:16.385 ] 00:18:16.385 }' 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.385 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.955 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:16.956 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.956 12:35:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:16.956 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:16.956 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.956 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.956 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.956 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.956 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.956 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.956 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.956 "name": "raid_bdev1", 00:18:16.956 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:16.956 "strip_size_kb": 0, 00:18:16.956 "state": "online", 00:18:16.956 "raid_level": "raid1", 00:18:16.956 "superblock": true, 00:18:16.956 "num_base_bdevs": 2, 00:18:16.956 "num_base_bdevs_discovered": 1, 00:18:16.956 "num_base_bdevs_operational": 1, 00:18:16.956 "base_bdevs_list": [ 00:18:16.956 { 00:18:16.956 "name": null, 00:18:16.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.956 "is_configured": false, 00:18:16.956 "data_offset": 0, 00:18:16.956 "data_size": 7936 00:18:16.956 }, 00:18:16.956 { 00:18:16.956 "name": "BaseBdev2", 00:18:16.956 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:16.956 "is_configured": true, 00:18:16.956 "data_offset": 256, 00:18:16.956 "data_size": 7936 00:18:16.956 } 00:18:16.956 ] 00:18:16.956 }' 00:18:16.956 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.956 12:35:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:16.956 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.956 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:16.956 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:16.956 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.956 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.956 [2024-09-30 12:35:28.778612] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:16.956 [2024-09-30 12:35:28.792974] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:16.956 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.956 12:35:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:16.956 [2024-09-30 12:35:28.794670] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.339 "name": "raid_bdev1", 00:18:18.339 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:18.339 "strip_size_kb": 0, 00:18:18.339 "state": "online", 00:18:18.339 "raid_level": "raid1", 00:18:18.339 "superblock": true, 00:18:18.339 "num_base_bdevs": 2, 00:18:18.339 "num_base_bdevs_discovered": 2, 00:18:18.339 "num_base_bdevs_operational": 2, 00:18:18.339 "process": { 00:18:18.339 "type": "rebuild", 00:18:18.339 "target": "spare", 00:18:18.339 "progress": { 00:18:18.339 "blocks": 2560, 00:18:18.339 "percent": 32 00:18:18.339 } 00:18:18.339 }, 00:18:18.339 "base_bdevs_list": [ 00:18:18.339 { 00:18:18.339 "name": "spare", 00:18:18.339 "uuid": "9d1657c8-ac7f-5e7f-8126-59173b41caaf", 00:18:18.339 "is_configured": true, 00:18:18.339 "data_offset": 256, 00:18:18.339 "data_size": 7936 00:18:18.339 }, 00:18:18.339 { 00:18:18.339 "name": "BaseBdev2", 00:18:18.339 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:18.339 "is_configured": true, 00:18:18.339 "data_offset": 256, 00:18:18.339 "data_size": 7936 00:18:18.339 } 00:18:18.339 ] 00:18:18.339 }' 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:18.339 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=674 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.339 12:35:29 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.339 "name": "raid_bdev1", 00:18:18.339 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:18.339 "strip_size_kb": 0, 00:18:18.339 "state": "online", 00:18:18.339 "raid_level": "raid1", 00:18:18.339 "superblock": true, 00:18:18.339 "num_base_bdevs": 2, 00:18:18.339 "num_base_bdevs_discovered": 2, 00:18:18.339 "num_base_bdevs_operational": 2, 00:18:18.339 "process": { 00:18:18.339 "type": "rebuild", 00:18:18.339 "target": "spare", 00:18:18.339 "progress": { 00:18:18.339 "blocks": 2816, 00:18:18.339 "percent": 35 00:18:18.339 } 00:18:18.339 }, 00:18:18.339 "base_bdevs_list": [ 00:18:18.339 { 00:18:18.339 "name": "spare", 00:18:18.339 "uuid": "9d1657c8-ac7f-5e7f-8126-59173b41caaf", 00:18:18.339 "is_configured": true, 00:18:18.339 "data_offset": 256, 00:18:18.339 "data_size": 7936 00:18:18.339 }, 00:18:18.339 { 00:18:18.339 "name": "BaseBdev2", 00:18:18.339 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:18.339 "is_configured": true, 00:18:18.339 "data_offset": 256, 00:18:18.339 "data_size": 7936 00:18:18.339 } 00:18:18.339 ] 00:18:18.339 }' 00:18:18.339 12:35:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.339 12:35:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.339 12:35:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.339 12:35:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.339 12:35:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:19.280 12:35:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:19.280 12:35:31 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.280 12:35:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.280 12:35:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.280 12:35:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.280 12:35:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.280 12:35:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.280 12:35:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.280 12:35:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.280 12:35:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.280 12:35:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.280 12:35:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.280 "name": "raid_bdev1", 00:18:19.280 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:19.280 "strip_size_kb": 0, 00:18:19.280 "state": "online", 00:18:19.280 "raid_level": "raid1", 00:18:19.280 "superblock": true, 00:18:19.280 "num_base_bdevs": 2, 00:18:19.280 "num_base_bdevs_discovered": 2, 00:18:19.280 "num_base_bdevs_operational": 2, 00:18:19.280 "process": { 00:18:19.280 "type": "rebuild", 00:18:19.280 "target": "spare", 00:18:19.280 "progress": { 00:18:19.280 "blocks": 5632, 00:18:19.280 "percent": 70 00:18:19.280 } 00:18:19.280 }, 00:18:19.280 "base_bdevs_list": [ 00:18:19.280 { 00:18:19.280 "name": "spare", 00:18:19.280 "uuid": "9d1657c8-ac7f-5e7f-8126-59173b41caaf", 00:18:19.280 "is_configured": true, 00:18:19.280 "data_offset": 256, 00:18:19.280 "data_size": 7936 00:18:19.280 
}, 00:18:19.280 { 00:18:19.280 "name": "BaseBdev2", 00:18:19.280 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:19.280 "is_configured": true, 00:18:19.280 "data_offset": 256, 00:18:19.280 "data_size": 7936 00:18:19.280 } 00:18:19.280 ] 00:18:19.280 }' 00:18:19.280 12:35:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.540 12:35:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.540 12:35:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.540 12:35:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.540 12:35:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:20.110 [2024-09-30 12:35:31.905765] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:20.110 [2024-09-30 12:35:31.905836] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:20.110 [2024-09-30 12:35:31.905934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.370 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:20.370 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.371 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.371 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.371 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.371 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.371 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:20.371 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.371 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.371 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.371 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.631 "name": "raid_bdev1", 00:18:20.631 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:20.631 "strip_size_kb": 0, 00:18:20.631 "state": "online", 00:18:20.631 "raid_level": "raid1", 00:18:20.631 "superblock": true, 00:18:20.631 "num_base_bdevs": 2, 00:18:20.631 "num_base_bdevs_discovered": 2, 00:18:20.631 "num_base_bdevs_operational": 2, 00:18:20.631 "base_bdevs_list": [ 00:18:20.631 { 00:18:20.631 "name": "spare", 00:18:20.631 "uuid": "9d1657c8-ac7f-5e7f-8126-59173b41caaf", 00:18:20.631 "is_configured": true, 00:18:20.631 "data_offset": 256, 00:18:20.631 "data_size": 7936 00:18:20.631 }, 00:18:20.631 { 00:18:20.631 "name": "BaseBdev2", 00:18:20.631 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:20.631 "is_configured": true, 00:18:20.631 "data_offset": 256, 00:18:20.631 "data_size": 7936 00:18:20.631 } 00:18:20.631 ] 00:18:20.631 }' 00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.631 "name": "raid_bdev1", 00:18:20.631 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:20.631 "strip_size_kb": 0, 00:18:20.631 "state": "online", 00:18:20.631 "raid_level": "raid1", 00:18:20.631 "superblock": true, 00:18:20.631 "num_base_bdevs": 2, 00:18:20.631 "num_base_bdevs_discovered": 2, 00:18:20.631 "num_base_bdevs_operational": 2, 00:18:20.631 "base_bdevs_list": [ 00:18:20.631 { 00:18:20.631 "name": "spare", 00:18:20.631 "uuid": "9d1657c8-ac7f-5e7f-8126-59173b41caaf", 00:18:20.631 "is_configured": true, 00:18:20.631 "data_offset": 256, 00:18:20.631 "data_size": 7936 00:18:20.631 }, 00:18:20.631 { 00:18:20.631 "name": "BaseBdev2", 00:18:20.631 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:20.631 "is_configured": true, 
00:18:20.631 "data_offset": 256, 00:18:20.631 "data_size": 7936 00:18:20.631 } 00:18:20.631 ] 00:18:20.631 }' 00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:20.631 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.891 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:20.891 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:20.891 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.891 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.891 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.891 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.891 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.891 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.891 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.891 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.891 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.891 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.891 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.891 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.891 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.891 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.891 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.891 "name": "raid_bdev1", 00:18:20.891 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:20.891 "strip_size_kb": 0, 00:18:20.891 "state": "online", 00:18:20.891 "raid_level": "raid1", 00:18:20.891 "superblock": true, 00:18:20.891 "num_base_bdevs": 2, 00:18:20.891 "num_base_bdevs_discovered": 2, 00:18:20.891 "num_base_bdevs_operational": 2, 00:18:20.891 "base_bdevs_list": [ 00:18:20.891 { 00:18:20.891 "name": "spare", 00:18:20.891 "uuid": "9d1657c8-ac7f-5e7f-8126-59173b41caaf", 00:18:20.891 "is_configured": true, 00:18:20.891 "data_offset": 256, 00:18:20.891 "data_size": 7936 00:18:20.891 }, 00:18:20.891 { 00:18:20.891 "name": "BaseBdev2", 00:18:20.891 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:20.891 "is_configured": true, 00:18:20.891 "data_offset": 256, 00:18:20.891 "data_size": 7936 00:18:20.891 } 00:18:20.891 ] 00:18:20.891 }' 00:18:20.891 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.891 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.151 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:21.151 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.151 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.151 [2024-09-30 12:35:32.949783] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:21.151 [2024-09-30 12:35:32.949850] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:18:21.151 [2024-09-30 12:35:32.949935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.151 [2024-09-30 12:35:32.950010] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.151 [2024-09-30 12:35:32.950042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:21.151 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.151 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.151 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:18:21.151 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.151 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.151 12:35:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.151 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:21.151 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:21.151 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:21.151 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:21.151 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:21.151 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:21.151 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:21.151 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:21.151 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:21.151 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:21.152 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:21.152 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:21.152 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:21.412 /dev/nbd0 00:18:21.412 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:21.412 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:21.412 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:21.412 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:18:21.412 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:21.412 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:21.412 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:21.412 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:18:21.412 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:21.412 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:21.412 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:21.412 1+0 records in 00:18:21.412 1+0 records out 00:18:21.412 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000403339 s, 10.2 MB/s 00:18:21.412 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.412 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:18:21.412 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.412 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:21.412 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:18:21.412 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:21.412 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:21.412 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:21.672 /dev/nbd1 00:18:21.672 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:21.672 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:21.672 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:21.672 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:18:21.672 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:21.672 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:21.672 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:21.672 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:18:21.672 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:21.672 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:21.672 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:21.672 1+0 records in 00:18:21.672 1+0 records out 00:18:21.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432818 s, 9.5 MB/s 00:18:21.672 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.672 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:18:21.672 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.672 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:21.672 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:18:21.672 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:21.672 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:21.672 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:21.931 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:21.931 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:21.931 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:21.931 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:21.931 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 
00:18:21.931 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:21.931 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:22.192 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:22.192 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:22.192 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:22.192 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:22.192 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:22.192 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:22.192 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:22.192 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:22.192 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:22.192 12:35:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # 
grep -q -w nbd1 /proc/partitions 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.453 [2024-09-30 12:35:34.147731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:22.453 [2024-09-30 12:35:34.147795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.453 [2024-09-30 12:35:34.147815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:22.453 [2024-09-30 12:35:34.147823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.453 [2024-09-30 12:35:34.149907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.453 [2024-09-30 12:35:34.149992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:22.453 [2024-09-30 12:35:34.150085] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:22.453 [2024-09-30 
12:35:34.150140] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:22.453 [2024-09-30 12:35:34.150292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:22.453 spare 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.453 [2024-09-30 12:35:34.250189] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:22.453 [2024-09-30 12:35:34.250218] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:22.453 [2024-09-30 12:35:34.250462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:22.453 [2024-09-30 12:35:34.250615] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:22.453 [2024-09-30 12:35:34.250624] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:22.453 [2024-09-30 12:35:34.250803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.453 "name": "raid_bdev1", 00:18:22.453 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:22.453 "strip_size_kb": 0, 00:18:22.453 "state": "online", 00:18:22.453 "raid_level": "raid1", 00:18:22.453 "superblock": true, 00:18:22.453 "num_base_bdevs": 2, 00:18:22.453 "num_base_bdevs_discovered": 2, 00:18:22.453 "num_base_bdevs_operational": 2, 00:18:22.453 "base_bdevs_list": [ 00:18:22.453 { 00:18:22.453 "name": "spare", 00:18:22.453 "uuid": "9d1657c8-ac7f-5e7f-8126-59173b41caaf", 00:18:22.453 "is_configured": true, 00:18:22.453 "data_offset": 256, 00:18:22.453 "data_size": 7936 00:18:22.453 }, 00:18:22.453 { 
00:18:22.453 "name": "BaseBdev2", 00:18:22.453 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:22.453 "is_configured": true, 00:18:22.453 "data_offset": 256, 00:18:22.453 "data_size": 7936 00:18:22.453 } 00:18:22.453 ] 00:18:22.453 }' 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.453 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.022 "name": "raid_bdev1", 00:18:23.022 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:23.022 "strip_size_kb": 0, 00:18:23.022 "state": "online", 00:18:23.022 "raid_level": "raid1", 00:18:23.022 "superblock": true, 00:18:23.022 "num_base_bdevs": 2, 00:18:23.022 "num_base_bdevs_discovered": 2, 
00:18:23.022 "num_base_bdevs_operational": 2, 00:18:23.022 "base_bdevs_list": [ 00:18:23.022 { 00:18:23.022 "name": "spare", 00:18:23.022 "uuid": "9d1657c8-ac7f-5e7f-8126-59173b41caaf", 00:18:23.022 "is_configured": true, 00:18:23.022 "data_offset": 256, 00:18:23.022 "data_size": 7936 00:18:23.022 }, 00:18:23.022 { 00:18:23.022 "name": "BaseBdev2", 00:18:23.022 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:23.022 "is_configured": true, 00:18:23.022 "data_offset": 256, 00:18:23.022 "data_size": 7936 00:18:23.022 } 00:18:23.022 ] 00:18:23.022 }' 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:23.022 [2024-09-30 12:35:34.858516] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.022 12:35:34 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.022 "name": "raid_bdev1", 00:18:23.022 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:23.022 "strip_size_kb": 0, 00:18:23.022 "state": "online", 00:18:23.022 "raid_level": "raid1", 00:18:23.022 "superblock": true, 00:18:23.022 "num_base_bdevs": 2, 00:18:23.022 "num_base_bdevs_discovered": 1, 00:18:23.022 "num_base_bdevs_operational": 1, 00:18:23.022 "base_bdevs_list": [ 00:18:23.022 { 00:18:23.022 "name": null, 00:18:23.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.022 "is_configured": false, 00:18:23.022 "data_offset": 0, 00:18:23.022 "data_size": 7936 00:18:23.022 }, 00:18:23.022 { 00:18:23.022 "name": "BaseBdev2", 00:18:23.022 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:23.022 "is_configured": true, 00:18:23.022 "data_offset": 256, 00:18:23.022 "data_size": 7936 00:18:23.022 } 00:18:23.022 ] 00:18:23.022 }' 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.022 12:35:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.591 12:35:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:23.591 12:35:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.591 12:35:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.591 [2024-09-30 12:35:35.245861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:23.591 [2024-09-30 12:35:35.246050] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:23.591 [2024-09-30 12:35:35.246123] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:23.591 [2024-09-30 12:35:35.246174] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:23.591 [2024-09-30 12:35:35.260318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:23.591 12:35:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.591 12:35:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:23.591 [2024-09-30 12:35:35.262093] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:24.530 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.530 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.530 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.530 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.530 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.530 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.530 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.530 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.530 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.530 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.530 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.530 "name": "raid_bdev1", 00:18:24.530 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:24.530 "strip_size_kb": 0, 00:18:24.530 "state": "online", 
00:18:24.530 "raid_level": "raid1", 00:18:24.530 "superblock": true, 00:18:24.530 "num_base_bdevs": 2, 00:18:24.530 "num_base_bdevs_discovered": 2, 00:18:24.530 "num_base_bdevs_operational": 2, 00:18:24.530 "process": { 00:18:24.530 "type": "rebuild", 00:18:24.530 "target": "spare", 00:18:24.530 "progress": { 00:18:24.530 "blocks": 2560, 00:18:24.530 "percent": 32 00:18:24.530 } 00:18:24.530 }, 00:18:24.530 "base_bdevs_list": [ 00:18:24.530 { 00:18:24.530 "name": "spare", 00:18:24.530 "uuid": "9d1657c8-ac7f-5e7f-8126-59173b41caaf", 00:18:24.530 "is_configured": true, 00:18:24.530 "data_offset": 256, 00:18:24.530 "data_size": 7936 00:18:24.530 }, 00:18:24.530 { 00:18:24.530 "name": "BaseBdev2", 00:18:24.530 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:24.530 "is_configured": true, 00:18:24.530 "data_offset": 256, 00:18:24.530 "data_size": 7936 00:18:24.530 } 00:18:24.530 ] 00:18:24.530 }' 00:18:24.530 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.530 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.530 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.530 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.530 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:24.530 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.530 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.530 [2024-09-30 12:35:36.421847] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:24.789 [2024-09-30 12:35:36.466659] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:24.789 [2024-09-30 
12:35:36.466718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.789 [2024-09-30 12:35:36.466732] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:24.789 [2024-09-30 12:35:36.466753] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:24.789 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.789 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:24.789 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.789 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.789 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.789 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.789 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:24.789 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.789 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.789 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.789 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.789 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.789 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.789 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.789 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:18:24.789 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.789 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.789 "name": "raid_bdev1", 00:18:24.789 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:24.789 "strip_size_kb": 0, 00:18:24.789 "state": "online", 00:18:24.789 "raid_level": "raid1", 00:18:24.789 "superblock": true, 00:18:24.789 "num_base_bdevs": 2, 00:18:24.789 "num_base_bdevs_discovered": 1, 00:18:24.789 "num_base_bdevs_operational": 1, 00:18:24.789 "base_bdevs_list": [ 00:18:24.789 { 00:18:24.789 "name": null, 00:18:24.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.789 "is_configured": false, 00:18:24.789 "data_offset": 0, 00:18:24.789 "data_size": 7936 00:18:24.789 }, 00:18:24.789 { 00:18:24.789 "name": "BaseBdev2", 00:18:24.789 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:24.789 "is_configured": true, 00:18:24.789 "data_offset": 256, 00:18:24.789 "data_size": 7936 00:18:24.789 } 00:18:24.789 ] 00:18:24.789 }' 00:18:24.789 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.789 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.048 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:25.048 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.048 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.048 [2024-09-30 12:35:36.940571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:25.048 [2024-09-30 12:35:36.940666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.048 [2024-09-30 12:35:36.940717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:18:25.048 [2024-09-30 12:35:36.940755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.048 [2024-09-30 12:35:36.941208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.048 [2024-09-30 12:35:36.941266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:25.048 [2024-09-30 12:35:36.941363] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:25.048 [2024-09-30 12:35:36.941404] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:25.048 [2024-09-30 12:35:36.941442] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:25.048 [2024-09-30 12:35:36.941517] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:25.307 [2024-09-30 12:35:36.955226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:25.307 spare 00:18:25.307 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.307 12:35:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:25.307 [2024-09-30 12:35:36.957027] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:26.245 12:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.245 12:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.245 12:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:26.245 12:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:26.245 12:35:37 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.245 12:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.245 12:35:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.245 12:35:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.245 12:35:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.245 12:35:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.245 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.245 "name": "raid_bdev1", 00:18:26.245 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:26.245 "strip_size_kb": 0, 00:18:26.245 "state": "online", 00:18:26.245 "raid_level": "raid1", 00:18:26.245 "superblock": true, 00:18:26.245 "num_base_bdevs": 2, 00:18:26.245 "num_base_bdevs_discovered": 2, 00:18:26.245 "num_base_bdevs_operational": 2, 00:18:26.245 "process": { 00:18:26.245 "type": "rebuild", 00:18:26.245 "target": "spare", 00:18:26.245 "progress": { 00:18:26.245 "blocks": 2560, 00:18:26.245 "percent": 32 00:18:26.245 } 00:18:26.245 }, 00:18:26.245 "base_bdevs_list": [ 00:18:26.245 { 00:18:26.245 "name": "spare", 00:18:26.245 "uuid": "9d1657c8-ac7f-5e7f-8126-59173b41caaf", 00:18:26.245 "is_configured": true, 00:18:26.245 "data_offset": 256, 00:18:26.245 "data_size": 7936 00:18:26.245 }, 00:18:26.245 { 00:18:26.245 "name": "BaseBdev2", 00:18:26.245 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:26.245 "is_configured": true, 00:18:26.245 "data_offset": 256, 00:18:26.245 "data_size": 7936 00:18:26.245 } 00:18:26.245 ] 00:18:26.245 }' 00:18:26.245 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.245 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:26.245 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.245 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:26.245 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:26.245 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.245 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.245 [2024-09-30 12:35:38.117126] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:26.505 [2024-09-30 12:35:38.161500] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:26.505 [2024-09-30 12:35:38.161553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.505 [2024-09-30 12:35:38.161570] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:26.505 [2024-09-30 12:35:38.161577] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:26.505 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.505 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:26.505 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.505 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.505 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.505 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.505 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:18:26.505 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.505 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.505 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.505 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.505 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.505 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.505 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.505 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.505 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.505 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.505 "name": "raid_bdev1", 00:18:26.505 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:26.505 "strip_size_kb": 0, 00:18:26.505 "state": "online", 00:18:26.505 "raid_level": "raid1", 00:18:26.505 "superblock": true, 00:18:26.505 "num_base_bdevs": 2, 00:18:26.505 "num_base_bdevs_discovered": 1, 00:18:26.505 "num_base_bdevs_operational": 1, 00:18:26.505 "base_bdevs_list": [ 00:18:26.505 { 00:18:26.505 "name": null, 00:18:26.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.505 "is_configured": false, 00:18:26.505 "data_offset": 0, 00:18:26.505 "data_size": 7936 00:18:26.505 }, 00:18:26.505 { 00:18:26.505 "name": "BaseBdev2", 00:18:26.505 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:26.505 "is_configured": true, 00:18:26.505 "data_offset": 256, 00:18:26.505 "data_size": 7936 00:18:26.505 } 00:18:26.505 ] 00:18:26.505 }' 
00:18:26.505 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.505 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.764 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:26.764 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.764 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:26.764 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:26.764 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.764 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.764 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.764 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.764 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.024 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.024 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.024 "name": "raid_bdev1", 00:18:27.024 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:27.024 "strip_size_kb": 0, 00:18:27.024 "state": "online", 00:18:27.024 "raid_level": "raid1", 00:18:27.024 "superblock": true, 00:18:27.024 "num_base_bdevs": 2, 00:18:27.024 "num_base_bdevs_discovered": 1, 00:18:27.024 "num_base_bdevs_operational": 1, 00:18:27.024 "base_bdevs_list": [ 00:18:27.024 { 00:18:27.024 "name": null, 00:18:27.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.024 "is_configured": false, 00:18:27.024 "data_offset": 0, 
00:18:27.024 "data_size": 7936 00:18:27.024 }, 00:18:27.024 { 00:18:27.024 "name": "BaseBdev2", 00:18:27.024 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:27.024 "is_configured": true, 00:18:27.024 "data_offset": 256, 00:18:27.024 "data_size": 7936 00:18:27.024 } 00:18:27.024 ] 00:18:27.024 }' 00:18:27.024 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.024 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:27.024 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.024 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:27.024 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:27.024 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.024 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.024 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.024 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:27.024 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.024 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.024 [2024-09-30 12:35:38.790442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:27.024 [2024-09-30 12:35:38.790495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.024 [2024-09-30 12:35:38.790515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:27.024 [2024-09-30 12:35:38.790523] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.024 [2024-09-30 12:35:38.790936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.024 [2024-09-30 12:35:38.790953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:27.024 [2024-09-30 12:35:38.791022] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:27.024 [2024-09-30 12:35:38.791034] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:27.024 [2024-09-30 12:35:38.791047] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:27.024 [2024-09-30 12:35:38.791055] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:27.024 BaseBdev1 00:18:27.024 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.024 12:35:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:27.971 12:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:27.971 12:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.971 12:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.971 12:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.971 12:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.971 12:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:27.971 12:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.971 12:35:39 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.971 12:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.971 12:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.971 12:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.971 12:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.971 12:35:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.971 12:35:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.971 12:35:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.971 12:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.971 "name": "raid_bdev1", 00:18:27.971 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:27.971 "strip_size_kb": 0, 00:18:27.971 "state": "online", 00:18:27.971 "raid_level": "raid1", 00:18:27.971 "superblock": true, 00:18:27.971 "num_base_bdevs": 2, 00:18:27.971 "num_base_bdevs_discovered": 1, 00:18:27.971 "num_base_bdevs_operational": 1, 00:18:27.971 "base_bdevs_list": [ 00:18:27.971 { 00:18:27.971 "name": null, 00:18:27.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.971 "is_configured": false, 00:18:27.971 "data_offset": 0, 00:18:27.971 "data_size": 7936 00:18:27.971 }, 00:18:27.971 { 00:18:27.971 "name": "BaseBdev2", 00:18:27.971 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:27.971 "is_configured": true, 00:18:27.971 "data_offset": 256, 00:18:27.971 "data_size": 7936 00:18:27.971 } 00:18:27.971 ] 00:18:27.971 }' 00:18:27.971 12:35:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.971 12:35:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.541 "name": "raid_bdev1", 00:18:28.541 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:28.541 "strip_size_kb": 0, 00:18:28.541 "state": "online", 00:18:28.541 "raid_level": "raid1", 00:18:28.541 "superblock": true, 00:18:28.541 "num_base_bdevs": 2, 00:18:28.541 "num_base_bdevs_discovered": 1, 00:18:28.541 "num_base_bdevs_operational": 1, 00:18:28.541 "base_bdevs_list": [ 00:18:28.541 { 00:18:28.541 "name": null, 00:18:28.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.541 "is_configured": false, 00:18:28.541 "data_offset": 0, 00:18:28.541 "data_size": 7936 00:18:28.541 }, 00:18:28.541 { 00:18:28.541 "name": "BaseBdev2", 00:18:28.541 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:28.541 "is_configured": true, 
00:18:28.541 "data_offset": 256, 00:18:28.541 "data_size": 7936 00:18:28.541 } 00:18:28.541 ] 00:18:28.541 }' 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.541 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.541 [2024-09-30 12:35:40.343873] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.541 [2024-09-30 12:35:40.344051] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:28.541 [2024-09-30 12:35:40.344106] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:28.541 request: 00:18:28.541 { 00:18:28.541 "base_bdev": "BaseBdev1", 00:18:28.541 "raid_bdev": "raid_bdev1", 00:18:28.541 "method": "bdev_raid_add_base_bdev", 00:18:28.541 "req_id": 1 00:18:28.541 } 00:18:28.541 Got JSON-RPC error response 00:18:28.541 response: 00:18:28.541 { 00:18:28.541 "code": -22, 00:18:28.541 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:28.541 } 00:18:28.542 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:28.542 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:18:28.542 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:28.542 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:28.542 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:28.542 12:35:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:29.482 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:29.482 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.482 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.482 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.482 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.482 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:29.482 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.482 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.482 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.482 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.482 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.482 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.482 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.482 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:29.741 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.741 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.741 "name": "raid_bdev1", 00:18:29.741 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:29.741 "strip_size_kb": 0, 00:18:29.741 "state": "online", 00:18:29.741 "raid_level": "raid1", 00:18:29.741 "superblock": true, 00:18:29.741 "num_base_bdevs": 2, 00:18:29.741 "num_base_bdevs_discovered": 1, 00:18:29.741 "num_base_bdevs_operational": 1, 00:18:29.741 "base_bdevs_list": [ 00:18:29.741 { 00:18:29.741 "name": null, 00:18:29.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.741 "is_configured": false, 00:18:29.741 "data_offset": 0, 00:18:29.741 "data_size": 7936 00:18:29.741 }, 00:18:29.741 { 00:18:29.741 "name": "BaseBdev2", 00:18:29.742 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:29.742 "is_configured": true, 00:18:29.742 "data_offset": 256, 00:18:29.742 "data_size": 7936 00:18:29.742 } 00:18:29.742 ] 00:18:29.742 }' 
00:18:29.742 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.742 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:30.001 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:30.001 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.001 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:30.001 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:30.001 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.001 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.001 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.001 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.001 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:30.001 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.001 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.001 "name": "raid_bdev1", 00:18:30.001 "uuid": "6064bb41-af72-4bf8-8936-3882afb933fe", 00:18:30.001 "strip_size_kb": 0, 00:18:30.001 "state": "online", 00:18:30.001 "raid_level": "raid1", 00:18:30.001 "superblock": true, 00:18:30.001 "num_base_bdevs": 2, 00:18:30.001 "num_base_bdevs_discovered": 1, 00:18:30.001 "num_base_bdevs_operational": 1, 00:18:30.001 "base_bdevs_list": [ 00:18:30.001 { 00:18:30.002 "name": null, 00:18:30.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.002 "is_configured": false, 00:18:30.002 "data_offset": 0, 
00:18:30.002 "data_size": 7936 00:18:30.002 }, 00:18:30.002 { 00:18:30.002 "name": "BaseBdev2", 00:18:30.002 "uuid": "25dc93a7-b2ab-5ad6-8866-b47b0ea615a0", 00:18:30.002 "is_configured": true, 00:18:30.002 "data_offset": 256, 00:18:30.002 "data_size": 7936 00:18:30.002 } 00:18:30.002 ] 00:18:30.002 }' 00:18:30.002 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.261 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:30.261 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.261 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:30.261 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86358 00:18:30.261 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 86358 ']' 00:18:30.261 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 86358 00:18:30.261 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:18:30.261 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:30.261 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86358 00:18:30.261 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:30.261 12:35:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:30.261 killing process with pid 86358 00:18:30.261 Received shutdown signal, test time was about 60.000000 seconds 00:18:30.261 00:18:30.261 Latency(us) 00:18:30.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.261 
=================================================================================================================== 00:18:30.261 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:30.261 12:35:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86358' 00:18:30.261 12:35:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 86358 00:18:30.261 [2024-09-30 12:35:42.002429] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:30.261 [2024-09-30 12:35:42.002538] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.262 [2024-09-30 12:35:42.002581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:30.262 12:35:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 86358 00:18:30.262 [2024-09-30 12:35:42.002591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:30.521 [2024-09-30 12:35:42.281608] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:31.904 12:35:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:31.904 00:18:31.904 real 0m19.822s 00:18:31.904 user 0m25.734s 00:18:31.904 sys 0m2.790s 00:18:31.904 ************************************ 00:18:31.904 END TEST raid_rebuild_test_sb_4k 00:18:31.904 ************************************ 00:18:31.904 12:35:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:31.904 12:35:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.904 12:35:43 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:31.904 12:35:43 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:31.904 12:35:43 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:31.904 12:35:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:31.904 12:35:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:31.904 ************************************ 00:18:31.904 START TEST raid_state_function_test_sb_md_separate 00:18:31.904 ************************************ 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:31.904 12:35:43 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:31.904 Process raid pid: 87049 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87049 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87049' 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87049 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87049 ']' 00:18:31.904 12:35:43 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:31.904 12:35:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.904 [2024-09-30 12:35:43.633300] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:31.904 [2024-09-30 12:35:43.633528] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.164 [2024-09-30 12:35:43.804311] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.164 [2024-09-30 12:35:44.003945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.425 [2024-09-30 12:35:44.180065] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:32.425 [2024-09-30 12:35:44.180167] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.685 [2024-09-30 12:35:44.473191] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:32.685 [2024-09-30 12:35:44.473241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:32.685 [2024-09-30 12:35:44.473251] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:32.685 [2024-09-30 12:35:44.473276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.685 "name": "Existed_Raid", 00:18:32.685 "uuid": "cc1f9a39-9c21-4c4a-87dc-e2891dca3031", 00:18:32.685 "strip_size_kb": 0, 00:18:32.685 "state": "configuring", 00:18:32.685 "raid_level": "raid1", 00:18:32.685 "superblock": true, 00:18:32.685 "num_base_bdevs": 2, 00:18:32.685 "num_base_bdevs_discovered": 0, 00:18:32.685 "num_base_bdevs_operational": 2, 00:18:32.685 "base_bdevs_list": [ 00:18:32.685 { 00:18:32.685 "name": "BaseBdev1", 00:18:32.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.685 "is_configured": false, 00:18:32.685 "data_offset": 0, 00:18:32.685 "data_size": 0 00:18:32.685 }, 00:18:32.685 { 00:18:32.685 "name": "BaseBdev2", 00:18:32.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.685 "is_configured": false, 00:18:32.685 "data_offset": 0, 00:18:32.685 "data_size": 0 00:18:32.685 } 00:18:32.685 ] 00:18:32.685 }' 00:18:32.685 12:35:44 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.685 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.255 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:33.255 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.255 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.255 [2024-09-30 12:35:44.940300] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:33.255 [2024-09-30 12:35:44.940379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:33.255 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.255 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:33.256 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.256 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.256 [2024-09-30 12:35:44.952298] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:33.256 [2024-09-30 12:35:44.952368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:33.256 [2024-09-30 12:35:44.952392] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:33.256 [2024-09-30 12:35:44.952415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:33.256 12:35:44 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.256 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:33.256 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.256 12:35:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.256 [2024-09-30 12:35:45.035109] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:33.256 BaseBdev1 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.256 [ 00:18:33.256 { 00:18:33.256 "name": "BaseBdev1", 00:18:33.256 "aliases": [ 00:18:33.256 "96036fd5-f84e-4b2f-8dd4-1a5919fe12a6" 00:18:33.256 ], 00:18:33.256 "product_name": "Malloc disk", 00:18:33.256 "block_size": 4096, 00:18:33.256 "num_blocks": 8192, 00:18:33.256 "uuid": "96036fd5-f84e-4b2f-8dd4-1a5919fe12a6", 00:18:33.256 "md_size": 32, 00:18:33.256 "md_interleave": false, 00:18:33.256 "dif_type": 0, 00:18:33.256 "assigned_rate_limits": { 00:18:33.256 "rw_ios_per_sec": 0, 00:18:33.256 "rw_mbytes_per_sec": 0, 00:18:33.256 "r_mbytes_per_sec": 0, 00:18:33.256 "w_mbytes_per_sec": 0 00:18:33.256 }, 00:18:33.256 "claimed": true, 00:18:33.256 "claim_type": "exclusive_write", 00:18:33.256 "zoned": false, 00:18:33.256 "supported_io_types": { 00:18:33.256 "read": true, 00:18:33.256 "write": true, 00:18:33.256 "unmap": true, 00:18:33.256 "flush": true, 00:18:33.256 "reset": true, 00:18:33.256 "nvme_admin": false, 00:18:33.256 "nvme_io": false, 00:18:33.256 "nvme_io_md": false, 00:18:33.256 "write_zeroes": true, 00:18:33.256 "zcopy": true, 00:18:33.256 "get_zone_info": false, 00:18:33.256 "zone_management": false, 00:18:33.256 "zone_append": false, 00:18:33.256 "compare": false, 00:18:33.256 "compare_and_write": false, 00:18:33.256 "abort": true, 00:18:33.256 "seek_hole": false, 00:18:33.256 "seek_data": false, 00:18:33.256 "copy": true, 00:18:33.256 "nvme_iov_md": false 00:18:33.256 }, 00:18:33.256 "memory_domains": [ 00:18:33.256 { 00:18:33.256 "dma_device_id": "system", 00:18:33.256 "dma_device_type": 1 00:18:33.256 }, 
00:18:33.256 { 00:18:33.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.256 "dma_device_type": 2 00:18:33.256 } 00:18:33.256 ], 00:18:33.256 "driver_specific": {} 00:18:33.256 } 00:18:33.256 ] 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.256 "name": "Existed_Raid", 00:18:33.256 "uuid": "c31b6faf-264d-4ffa-aa0d-5c7f005fcb80", 00:18:33.256 "strip_size_kb": 0, 00:18:33.256 "state": "configuring", 00:18:33.256 "raid_level": "raid1", 00:18:33.256 "superblock": true, 00:18:33.256 "num_base_bdevs": 2, 00:18:33.256 "num_base_bdevs_discovered": 1, 00:18:33.256 "num_base_bdevs_operational": 2, 00:18:33.256 "base_bdevs_list": [ 00:18:33.256 { 00:18:33.256 "name": "BaseBdev1", 00:18:33.256 "uuid": "96036fd5-f84e-4b2f-8dd4-1a5919fe12a6", 00:18:33.256 "is_configured": true, 00:18:33.256 "data_offset": 256, 00:18:33.256 "data_size": 7936 00:18:33.256 }, 00:18:33.256 { 00:18:33.256 "name": "BaseBdev2", 00:18:33.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.256 "is_configured": false, 00:18:33.256 "data_offset": 0, 00:18:33.256 "data_size": 0 00:18:33.256 } 00:18:33.256 ] 00:18:33.256 }' 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.256 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:18:33.826 [2024-09-30 12:35:45.562313] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:33.826 [2024-09-30 12:35:45.562395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.826 [2024-09-30 12:35:45.574360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:33.826 [2024-09-30 12:35:45.576187] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:33.826 [2024-09-30 12:35:45.576264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.826 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.826 "name": "Existed_Raid", 00:18:33.826 "uuid": "506c917c-a391-492c-a84c-4572f9eab182", 00:18:33.826 "strip_size_kb": 0, 00:18:33.826 "state": "configuring", 00:18:33.826 "raid_level": "raid1", 00:18:33.826 "superblock": true, 00:18:33.826 "num_base_bdevs": 2, 00:18:33.826 "num_base_bdevs_discovered": 1, 00:18:33.826 
"num_base_bdevs_operational": 2, 00:18:33.826 "base_bdevs_list": [ 00:18:33.826 { 00:18:33.826 "name": "BaseBdev1", 00:18:33.826 "uuid": "96036fd5-f84e-4b2f-8dd4-1a5919fe12a6", 00:18:33.826 "is_configured": true, 00:18:33.826 "data_offset": 256, 00:18:33.826 "data_size": 7936 00:18:33.827 }, 00:18:33.827 { 00:18:33.827 "name": "BaseBdev2", 00:18:33.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.827 "is_configured": false, 00:18:33.827 "data_offset": 0, 00:18:33.827 "data_size": 0 00:18:33.827 } 00:18:33.827 ] 00:18:33.827 }' 00:18:33.827 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.827 12:35:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.397 [2024-09-30 12:35:46.083035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:34.397 [2024-09-30 12:35:46.083308] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:34.397 [2024-09-30 12:35:46.083344] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:34.397 [2024-09-30 12:35:46.083497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:34.397 [2024-09-30 12:35:46.083632] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:34.397 [2024-09-30 12:35:46.083672] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:34.397 [2024-09-30 
12:35:46.083831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.397 BaseBdev2 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.397 [ 00:18:34.397 { 00:18:34.397 "name": "BaseBdev2", 00:18:34.397 "aliases": [ 00:18:34.397 
"e5b8b303-8d61-46fe-a1fb-0c97292d49d4" 00:18:34.397 ], 00:18:34.397 "product_name": "Malloc disk", 00:18:34.397 "block_size": 4096, 00:18:34.397 "num_blocks": 8192, 00:18:34.397 "uuid": "e5b8b303-8d61-46fe-a1fb-0c97292d49d4", 00:18:34.397 "md_size": 32, 00:18:34.397 "md_interleave": false, 00:18:34.397 "dif_type": 0, 00:18:34.397 "assigned_rate_limits": { 00:18:34.397 "rw_ios_per_sec": 0, 00:18:34.397 "rw_mbytes_per_sec": 0, 00:18:34.397 "r_mbytes_per_sec": 0, 00:18:34.397 "w_mbytes_per_sec": 0 00:18:34.397 }, 00:18:34.397 "claimed": true, 00:18:34.397 "claim_type": "exclusive_write", 00:18:34.397 "zoned": false, 00:18:34.397 "supported_io_types": { 00:18:34.397 "read": true, 00:18:34.397 "write": true, 00:18:34.397 "unmap": true, 00:18:34.397 "flush": true, 00:18:34.397 "reset": true, 00:18:34.397 "nvme_admin": false, 00:18:34.397 "nvme_io": false, 00:18:34.397 "nvme_io_md": false, 00:18:34.397 "write_zeroes": true, 00:18:34.397 "zcopy": true, 00:18:34.397 "get_zone_info": false, 00:18:34.397 "zone_management": false, 00:18:34.397 "zone_append": false, 00:18:34.397 "compare": false, 00:18:34.397 "compare_and_write": false, 00:18:34.397 "abort": true, 00:18:34.397 "seek_hole": false, 00:18:34.397 "seek_data": false, 00:18:34.397 "copy": true, 00:18:34.397 "nvme_iov_md": false 00:18:34.397 }, 00:18:34.397 "memory_domains": [ 00:18:34.397 { 00:18:34.397 "dma_device_id": "system", 00:18:34.397 "dma_device_type": 1 00:18:34.397 }, 00:18:34.397 { 00:18:34.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.397 "dma_device_type": 2 00:18:34.397 } 00:18:34.397 ], 00:18:34.397 "driver_specific": {} 00:18:34.397 } 00:18:34.397 ] 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.397 12:35:46 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.397 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.397 "name": "Existed_Raid", 00:18:34.397 "uuid": "506c917c-a391-492c-a84c-4572f9eab182", 00:18:34.397 "strip_size_kb": 0, 00:18:34.397 "state": "online", 00:18:34.397 "raid_level": "raid1", 00:18:34.397 "superblock": true, 00:18:34.397 "num_base_bdevs": 2, 00:18:34.397 "num_base_bdevs_discovered": 2, 00:18:34.397 "num_base_bdevs_operational": 2, 00:18:34.397 "base_bdevs_list": [ 00:18:34.397 { 00:18:34.397 "name": "BaseBdev1", 00:18:34.397 "uuid": "96036fd5-f84e-4b2f-8dd4-1a5919fe12a6", 00:18:34.398 "is_configured": true, 00:18:34.398 "data_offset": 256, 00:18:34.398 "data_size": 7936 00:18:34.398 }, 00:18:34.398 { 00:18:34.398 "name": "BaseBdev2", 00:18:34.398 "uuid": "e5b8b303-8d61-46fe-a1fb-0c97292d49d4", 00:18:34.398 "is_configured": true, 00:18:34.398 "data_offset": 256, 00:18:34.398 "data_size": 7936 00:18:34.398 } 00:18:34.398 ] 00:18:34.398 }' 00:18:34.398 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.398 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.967 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:34.967 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:34.967 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:34.967 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:34.967 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:34.967 12:35:46 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:34.967 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:34.967 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:34.967 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.967 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.967 [2024-09-30 12:35:46.594627] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:34.967 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.967 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:34.967 "name": "Existed_Raid", 00:18:34.967 "aliases": [ 00:18:34.967 "506c917c-a391-492c-a84c-4572f9eab182" 00:18:34.967 ], 00:18:34.967 "product_name": "Raid Volume", 00:18:34.967 "block_size": 4096, 00:18:34.967 "num_blocks": 7936, 00:18:34.967 "uuid": "506c917c-a391-492c-a84c-4572f9eab182", 00:18:34.967 "md_size": 32, 00:18:34.967 "md_interleave": false, 00:18:34.967 "dif_type": 0, 00:18:34.967 "assigned_rate_limits": { 00:18:34.967 "rw_ios_per_sec": 0, 00:18:34.967 "rw_mbytes_per_sec": 0, 00:18:34.967 "r_mbytes_per_sec": 0, 00:18:34.967 "w_mbytes_per_sec": 0 00:18:34.967 }, 00:18:34.967 "claimed": false, 00:18:34.967 "zoned": false, 00:18:34.967 "supported_io_types": { 00:18:34.967 "read": true, 00:18:34.967 "write": true, 00:18:34.967 "unmap": false, 00:18:34.967 "flush": false, 00:18:34.967 "reset": true, 00:18:34.967 "nvme_admin": false, 00:18:34.967 "nvme_io": false, 00:18:34.967 "nvme_io_md": false, 00:18:34.967 "write_zeroes": true, 00:18:34.967 "zcopy": false, 00:18:34.967 "get_zone_info": 
false, 00:18:34.967 "zone_management": false, 00:18:34.967 "zone_append": false, 00:18:34.967 "compare": false, 00:18:34.967 "compare_and_write": false, 00:18:34.967 "abort": false, 00:18:34.967 "seek_hole": false, 00:18:34.967 "seek_data": false, 00:18:34.967 "copy": false, 00:18:34.967 "nvme_iov_md": false 00:18:34.967 }, 00:18:34.967 "memory_domains": [ 00:18:34.967 { 00:18:34.967 "dma_device_id": "system", 00:18:34.967 "dma_device_type": 1 00:18:34.967 }, 00:18:34.967 { 00:18:34.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.967 "dma_device_type": 2 00:18:34.967 }, 00:18:34.967 { 00:18:34.967 "dma_device_id": "system", 00:18:34.967 "dma_device_type": 1 00:18:34.967 }, 00:18:34.967 { 00:18:34.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.967 "dma_device_type": 2 00:18:34.967 } 00:18:34.967 ], 00:18:34.967 "driver_specific": { 00:18:34.967 "raid": { 00:18:34.967 "uuid": "506c917c-a391-492c-a84c-4572f9eab182", 00:18:34.967 "strip_size_kb": 0, 00:18:34.967 "state": "online", 00:18:34.967 "raid_level": "raid1", 00:18:34.967 "superblock": true, 00:18:34.967 "num_base_bdevs": 2, 00:18:34.967 "num_base_bdevs_discovered": 2, 00:18:34.967 "num_base_bdevs_operational": 2, 00:18:34.967 "base_bdevs_list": [ 00:18:34.967 { 00:18:34.967 "name": "BaseBdev1", 00:18:34.967 "uuid": "96036fd5-f84e-4b2f-8dd4-1a5919fe12a6", 00:18:34.967 "is_configured": true, 00:18:34.967 "data_offset": 256, 00:18:34.967 "data_size": 7936 00:18:34.967 }, 00:18:34.967 { 00:18:34.967 "name": "BaseBdev2", 00:18:34.967 "uuid": "e5b8b303-8d61-46fe-a1fb-0c97292d49d4", 00:18:34.967 "is_configured": true, 00:18:34.967 "data_offset": 256, 00:18:34.967 "data_size": 7936 00:18:34.967 } 00:18:34.967 ] 00:18:34.967 } 00:18:34.967 } 00:18:34.967 }' 00:18:34.967 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:34.967 12:35:46 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:34.967 BaseBdev2' 00:18:34.967 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.967 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:34.967 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:34.967 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.967 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:34.968 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.968 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.968 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.968 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:34.968 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:34.968 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:34.968 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:34.968 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.968 12:35:46 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.968 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.968 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.968 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:34.968 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:34.968 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:34.968 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.968 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.968 [2024-09-30 12:35:46.809916] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.228 "name": "Existed_Raid", 
00:18:35.228 "uuid": "506c917c-a391-492c-a84c-4572f9eab182", 00:18:35.228 "strip_size_kb": 0, 00:18:35.228 "state": "online", 00:18:35.228 "raid_level": "raid1", 00:18:35.228 "superblock": true, 00:18:35.228 "num_base_bdevs": 2, 00:18:35.228 "num_base_bdevs_discovered": 1, 00:18:35.228 "num_base_bdevs_operational": 1, 00:18:35.228 "base_bdevs_list": [ 00:18:35.228 { 00:18:35.228 "name": null, 00:18:35.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.228 "is_configured": false, 00:18:35.228 "data_offset": 0, 00:18:35.228 "data_size": 7936 00:18:35.228 }, 00:18:35.228 { 00:18:35.228 "name": "BaseBdev2", 00:18:35.228 "uuid": "e5b8b303-8d61-46fe-a1fb-0c97292d49d4", 00:18:35.228 "is_configured": true, 00:18:35.228 "data_offset": 256, 00:18:35.228 "data_size": 7936 00:18:35.228 } 00:18:35.228 ] 00:18:35.228 }' 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.228 12:35:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.488 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:35.488 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:35.488 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:35.488 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.488 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.488 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.488 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.488 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:35.488 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:35.488 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:35.488 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.488 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.488 [2024-09-30 12:35:47.371714] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:35.488 [2024-09-30 12:35:47.371869] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:35.749 [2024-09-30 12:35:47.468604] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:35.749 [2024-09-30 12:35:47.468727] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:35.749 [2024-09-30 12:35:47.468801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87049 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87049 ']' 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 87049 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87049 00:18:35.749 killing process with pid 87049 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87049' 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 87049 00:18:35.749 [2024-09-30 12:35:47.561598] 
bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:35.749 12:35:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 87049 00:18:35.749 [2024-09-30 12:35:47.576068] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:37.132 12:35:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:37.132 00:18:37.132 real 0m5.223s 00:18:37.132 user 0m7.471s 00:18:37.132 sys 0m0.921s 00:18:37.132 ************************************ 00:18:37.132 END TEST raid_state_function_test_sb_md_separate 00:18:37.132 ************************************ 00:18:37.132 12:35:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:37.132 12:35:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.132 12:35:48 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:37.132 12:35:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:37.132 12:35:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:37.132 12:35:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:37.132 ************************************ 00:18:37.132 START TEST raid_superblock_test_md_separate 00:18:37.132 ************************************ 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87304 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87304 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87304 ']' 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.132 12:35:48 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:37.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:37.132 12:35:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.132 [2024-09-30 12:35:48.933164] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:37.132 [2024-09-30 12:35:48.933307] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87304 ] 00:18:37.394 [2024-09-30 12:35:49.101895] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.655 [2024-09-30 12:35:49.295167] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.655 [2024-09-30 12:35:49.490671] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.655 [2024-09-30 12:35:49.490815] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.916 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.916 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:18:37.916 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:37.916 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:37.916 12:35:49 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:37.916 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:37.916 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:37.916 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:37.916 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:37.916 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:37.916 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:37.916 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.916 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.916 malloc1 00:18:37.916 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.916 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:37.916 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.916 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.916 [2024-09-30 12:35:49.810348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:37.916 [2024-09-30 12:35:49.810450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.916 [2024-09-30 12:35:49.810493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000007280 00:18:37.916 [2024-09-30 12:35:49.810521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.176 [2024-09-30 12:35:49.812287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.176 [2024-09-30 12:35:49.812368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:38.176 pt1 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.176 malloc2 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.176 [2024-09-30 12:35:49.909633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:38.176 [2024-09-30 12:35:49.909728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.176 [2024-09-30 12:35:49.909781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:38.176 [2024-09-30 12:35:49.909809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.176 [2024-09-30 12:35:49.911523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.176 [2024-09-30 12:35:49.911620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:38.176 pt2 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.176 [2024-09-30 12:35:49.921676] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:38.176 [2024-09-30 12:35:49.923315] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:38.176 [2024-09-30 12:35:49.923468] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:38.176 [2024-09-30 12:35:49.923480] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:38.176 [2024-09-30 12:35:49.923553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:38.176 [2024-09-30 12:35:49.923680] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:38.176 [2024-09-30 12:35:49.923691] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:38.176 [2024-09-30 12:35:49.923799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.176 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:38.177 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.177 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.177 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.177 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.177 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:38.177 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.177 12:35:49 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.177 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.177 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.177 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.177 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.177 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.177 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.177 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.177 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.177 "name": "raid_bdev1", 00:18:38.177 "uuid": "eb664f27-31f4-4171-a970-056ac66c5649", 00:18:38.177 "strip_size_kb": 0, 00:18:38.177 "state": "online", 00:18:38.177 "raid_level": "raid1", 00:18:38.177 "superblock": true, 00:18:38.177 "num_base_bdevs": 2, 00:18:38.177 "num_base_bdevs_discovered": 2, 00:18:38.177 "num_base_bdevs_operational": 2, 00:18:38.177 "base_bdevs_list": [ 00:18:38.177 { 00:18:38.177 "name": "pt1", 00:18:38.177 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:38.177 "is_configured": true, 00:18:38.177 "data_offset": 256, 00:18:38.177 "data_size": 7936 00:18:38.177 }, 00:18:38.177 { 00:18:38.177 "name": "pt2", 00:18:38.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:38.177 "is_configured": true, 00:18:38.177 "data_offset": 256, 00:18:38.177 "data_size": 7936 00:18:38.177 } 00:18:38.177 ] 00:18:38.177 }' 00:18:38.177 12:35:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:38.177 12:35:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.747 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:38.747 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:38.747 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:38.747 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:38.747 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:38.747 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:38.747 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:38.747 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.747 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.747 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:38.747 [2024-09-30 12:35:50.369076] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.747 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.747 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:38.747 "name": "raid_bdev1", 00:18:38.747 "aliases": [ 00:18:38.747 "eb664f27-31f4-4171-a970-056ac66c5649" 00:18:38.747 ], 00:18:38.747 "product_name": "Raid Volume", 00:18:38.747 "block_size": 4096, 00:18:38.747 "num_blocks": 7936, 00:18:38.747 "uuid": "eb664f27-31f4-4171-a970-056ac66c5649", 00:18:38.747 "md_size": 32, 
00:18:38.747 "md_interleave": false, 00:18:38.747 "dif_type": 0, 00:18:38.747 "assigned_rate_limits": { 00:18:38.747 "rw_ios_per_sec": 0, 00:18:38.747 "rw_mbytes_per_sec": 0, 00:18:38.747 "r_mbytes_per_sec": 0, 00:18:38.747 "w_mbytes_per_sec": 0 00:18:38.747 }, 00:18:38.747 "claimed": false, 00:18:38.747 "zoned": false, 00:18:38.747 "supported_io_types": { 00:18:38.747 "read": true, 00:18:38.747 "write": true, 00:18:38.747 "unmap": false, 00:18:38.747 "flush": false, 00:18:38.747 "reset": true, 00:18:38.747 "nvme_admin": false, 00:18:38.747 "nvme_io": false, 00:18:38.747 "nvme_io_md": false, 00:18:38.747 "write_zeroes": true, 00:18:38.747 "zcopy": false, 00:18:38.747 "get_zone_info": false, 00:18:38.747 "zone_management": false, 00:18:38.747 "zone_append": false, 00:18:38.747 "compare": false, 00:18:38.747 "compare_and_write": false, 00:18:38.747 "abort": false, 00:18:38.747 "seek_hole": false, 00:18:38.747 "seek_data": false, 00:18:38.747 "copy": false, 00:18:38.747 "nvme_iov_md": false 00:18:38.747 }, 00:18:38.747 "memory_domains": [ 00:18:38.747 { 00:18:38.747 "dma_device_id": "system", 00:18:38.747 "dma_device_type": 1 00:18:38.747 }, 00:18:38.747 { 00:18:38.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.747 "dma_device_type": 2 00:18:38.747 }, 00:18:38.747 { 00:18:38.747 "dma_device_id": "system", 00:18:38.747 "dma_device_type": 1 00:18:38.747 }, 00:18:38.747 { 00:18:38.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.747 "dma_device_type": 2 00:18:38.747 } 00:18:38.747 ], 00:18:38.747 "driver_specific": { 00:18:38.747 "raid": { 00:18:38.747 "uuid": "eb664f27-31f4-4171-a970-056ac66c5649", 00:18:38.747 "strip_size_kb": 0, 00:18:38.747 "state": "online", 00:18:38.747 "raid_level": "raid1", 00:18:38.747 "superblock": true, 00:18:38.747 "num_base_bdevs": 2, 00:18:38.748 "num_base_bdevs_discovered": 2, 00:18:38.748 "num_base_bdevs_operational": 2, 00:18:38.748 "base_bdevs_list": [ 00:18:38.748 { 00:18:38.748 "name": "pt1", 00:18:38.748 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:38.748 "is_configured": true, 00:18:38.748 "data_offset": 256, 00:18:38.748 "data_size": 7936 00:18:38.748 }, 00:18:38.748 { 00:18:38.748 "name": "pt2", 00:18:38.748 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:38.748 "is_configured": true, 00:18:38.748 "data_offset": 256, 00:18:38.748 "data_size": 7936 00:18:38.748 } 00:18:38.748 ] 00:18:38.748 } 00:18:38.748 } 00:18:38.748 }' 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:38.748 pt2' 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:38.748 [2024-09-30 12:35:50.616662] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.748 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=eb664f27-31f4-4171-a970-056ac66c5649 00:18:39.009 
12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z eb664f27-31f4-4171-a970-056ac66c5649 ']' 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.009 [2024-09-30 12:35:50.664360] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:39.009 [2024-09-30 12:35:50.664382] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:39.009 [2024-09-30 12:35:50.664440] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.009 [2024-09-30 12:35:50.664484] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:39.009 [2024-09-30 12:35:50.664494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:39.009 12:35:50 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false 
== true ']' 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.009 [2024-09-30 12:35:50.812138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:39.009 [2024-09-30 12:35:50.813832] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:39.009 [2024-09-30 12:35:50.813900] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:39.009 [2024-09-30 12:35:50.813941] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 
00:18:39.009 [2024-09-30 12:35:50.813954] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:39.009 [2024-09-30 12:35:50.813963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:39.009 request: 00:18:39.009 { 00:18:39.009 "name": "raid_bdev1", 00:18:39.009 "raid_level": "raid1", 00:18:39.009 "base_bdevs": [ 00:18:39.009 "malloc1", 00:18:39.009 "malloc2" 00:18:39.009 ], 00:18:39.009 "superblock": false, 00:18:39.009 "method": "bdev_raid_create", 00:18:39.009 "req_id": 1 00:18:39.009 } 00:18:39.009 Got JSON-RPC error response 00:18:39.009 response: 00:18:39.009 { 00:18:39.009 "code": -17, 00:18:39.009 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:39.009 } 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.009 12:35:50 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.009 [2024-09-30 12:35:50.879989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:39.009 [2024-09-30 12:35:50.880069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.009 [2024-09-30 12:35:50.880099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:39.009 [2024-09-30 12:35:50.880126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.009 [2024-09-30 12:35:50.881992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.009 [2024-09-30 12:35:50.882058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:39.009 [2024-09-30 12:35:50.882118] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:39.009 [2024-09-30 12:35:50.882181] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:39.009 pt1 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.009 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:39.010 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.010 
12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:39.010 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.010 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.010 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.010 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.010 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.010 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.010 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.010 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.010 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.010 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.010 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.273 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.273 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.273 "name": "raid_bdev1", 00:18:39.273 "uuid": "eb664f27-31f4-4171-a970-056ac66c5649", 00:18:39.273 "strip_size_kb": 0, 00:18:39.273 "state": "configuring", 00:18:39.273 "raid_level": "raid1", 00:18:39.273 "superblock": true, 00:18:39.273 "num_base_bdevs": 2, 00:18:39.273 "num_base_bdevs_discovered": 1, 00:18:39.273 
"num_base_bdevs_operational": 2, 00:18:39.273 "base_bdevs_list": [ 00:18:39.273 { 00:18:39.273 "name": "pt1", 00:18:39.273 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:39.273 "is_configured": true, 00:18:39.273 "data_offset": 256, 00:18:39.273 "data_size": 7936 00:18:39.273 }, 00:18:39.273 { 00:18:39.273 "name": null, 00:18:39.273 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.273 "is_configured": false, 00:18:39.273 "data_offset": 256, 00:18:39.273 "data_size": 7936 00:18:39.273 } 00:18:39.273 ] 00:18:39.273 }' 00:18:39.273 12:35:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.273 12:35:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.543 [2024-09-30 12:35:51.319653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:39.543 [2024-09-30 12:35:51.319755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.543 [2024-09-30 12:35:51.319777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:39.543 [2024-09-30 12:35:51.319786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.543 
[2024-09-30 12:35:51.319950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.543 [2024-09-30 12:35:51.319964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:39.543 [2024-09-30 12:35:51.320000] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:39.543 [2024-09-30 12:35:51.320018] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:39.543 [2024-09-30 12:35:51.320113] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:39.543 [2024-09-30 12:35:51.320122] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:39.543 [2024-09-30 12:35:51.320180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:39.543 [2024-09-30 12:35:51.320284] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:39.543 [2024-09-30 12:35:51.320291] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:39.543 [2024-09-30 12:35:51.320372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.543 pt2 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.543 "name": "raid_bdev1", 00:18:39.543 "uuid": "eb664f27-31f4-4171-a970-056ac66c5649", 00:18:39.543 "strip_size_kb": 0, 00:18:39.543 "state": "online", 00:18:39.543 "raid_level": "raid1", 00:18:39.543 "superblock": true, 00:18:39.543 "num_base_bdevs": 2, 00:18:39.543 "num_base_bdevs_discovered": 2, 00:18:39.543 "num_base_bdevs_operational": 2, 00:18:39.543 "base_bdevs_list": [ 00:18:39.543 { 00:18:39.543 "name": 
"pt1", 00:18:39.543 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:39.543 "is_configured": true, 00:18:39.543 "data_offset": 256, 00:18:39.543 "data_size": 7936 00:18:39.543 }, 00:18:39.543 { 00:18:39.543 "name": "pt2", 00:18:39.543 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.543 "is_configured": true, 00:18:39.543 "data_offset": 256, 00:18:39.543 "data_size": 7936 00:18:39.543 } 00:18:39.543 ] 00:18:39.543 }' 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.543 12:35:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.158 [2024-09-30 12:35:51.815257] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:40.158 12:35:51 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:40.158 "name": "raid_bdev1", 00:18:40.158 "aliases": [ 00:18:40.158 "eb664f27-31f4-4171-a970-056ac66c5649" 00:18:40.158 ], 00:18:40.158 "product_name": "Raid Volume", 00:18:40.158 "block_size": 4096, 00:18:40.158 "num_blocks": 7936, 00:18:40.158 "uuid": "eb664f27-31f4-4171-a970-056ac66c5649", 00:18:40.158 "md_size": 32, 00:18:40.158 "md_interleave": false, 00:18:40.158 "dif_type": 0, 00:18:40.158 "assigned_rate_limits": { 00:18:40.158 "rw_ios_per_sec": 0, 00:18:40.158 "rw_mbytes_per_sec": 0, 00:18:40.158 "r_mbytes_per_sec": 0, 00:18:40.158 "w_mbytes_per_sec": 0 00:18:40.158 }, 00:18:40.158 "claimed": false, 00:18:40.158 "zoned": false, 00:18:40.158 "supported_io_types": { 00:18:40.158 "read": true, 00:18:40.158 "write": true, 00:18:40.158 "unmap": false, 00:18:40.158 "flush": false, 00:18:40.158 "reset": true, 00:18:40.158 "nvme_admin": false, 00:18:40.158 "nvme_io": false, 00:18:40.158 "nvme_io_md": false, 00:18:40.158 "write_zeroes": true, 00:18:40.158 "zcopy": false, 00:18:40.158 "get_zone_info": false, 00:18:40.158 "zone_management": false, 00:18:40.158 "zone_append": false, 00:18:40.158 "compare": false, 00:18:40.158 "compare_and_write": false, 00:18:40.158 "abort": false, 00:18:40.158 "seek_hole": false, 00:18:40.158 "seek_data": false, 00:18:40.158 "copy": false, 00:18:40.158 "nvme_iov_md": false 00:18:40.158 }, 00:18:40.158 "memory_domains": [ 00:18:40.158 { 00:18:40.158 "dma_device_id": "system", 00:18:40.158 "dma_device_type": 1 00:18:40.158 }, 00:18:40.158 { 00:18:40.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.158 "dma_device_type": 2 00:18:40.158 }, 00:18:40.158 { 00:18:40.158 "dma_device_id": "system", 00:18:40.158 "dma_device_type": 1 00:18:40.158 }, 00:18:40.158 { 00:18:40.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.158 
"dma_device_type": 2 00:18:40.158 } 00:18:40.158 ], 00:18:40.158 "driver_specific": { 00:18:40.158 "raid": { 00:18:40.158 "uuid": "eb664f27-31f4-4171-a970-056ac66c5649", 00:18:40.158 "strip_size_kb": 0, 00:18:40.158 "state": "online", 00:18:40.158 "raid_level": "raid1", 00:18:40.158 "superblock": true, 00:18:40.158 "num_base_bdevs": 2, 00:18:40.158 "num_base_bdevs_discovered": 2, 00:18:40.158 "num_base_bdevs_operational": 2, 00:18:40.158 "base_bdevs_list": [ 00:18:40.158 { 00:18:40.158 "name": "pt1", 00:18:40.158 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:40.158 "is_configured": true, 00:18:40.158 "data_offset": 256, 00:18:40.158 "data_size": 7936 00:18:40.158 }, 00:18:40.158 { 00:18:40.158 "name": "pt2", 00:18:40.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.158 "is_configured": true, 00:18:40.158 "data_offset": 256, 00:18:40.158 "data_size": 7936 00:18:40.158 } 00:18:40.158 ] 00:18:40.158 } 00:18:40.158 } 00:18:40.158 }' 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:40.158 pt2' 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.158 12:35:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.158 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.158 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:40.158 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:40.158 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:40.158 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.158 12:35:52 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:40.158 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:40.158 [2024-09-30 12:35:52.030913] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:40.158 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' eb664f27-31f4-4171-a970-056ac66c5649 '!=' eb664f27-31f4-4171-a970-056ac66c5649 ']' 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.418 [2024-09-30 12:35:52.082635] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.418 
12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.418 "name": "raid_bdev1", 00:18:40.418 "uuid": "eb664f27-31f4-4171-a970-056ac66c5649", 00:18:40.418 "strip_size_kb": 0, 00:18:40.418 "state": "online", 00:18:40.418 "raid_level": "raid1", 00:18:40.418 "superblock": true, 00:18:40.418 "num_base_bdevs": 2, 00:18:40.418 "num_base_bdevs_discovered": 1, 00:18:40.418 "num_base_bdevs_operational": 1, 00:18:40.418 "base_bdevs_list": [ 00:18:40.418 { 00:18:40.418 "name": null, 00:18:40.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.418 "is_configured": false, 00:18:40.418 "data_offset": 0, 00:18:40.418 
"data_size": 7936 00:18:40.418 }, 00:18:40.418 { 00:18:40.418 "name": "pt2", 00:18:40.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.418 "is_configured": true, 00:18:40.418 "data_offset": 256, 00:18:40.418 "data_size": 7936 00:18:40.418 } 00:18:40.418 ] 00:18:40.418 }' 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.418 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.679 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:40.679 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.679 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.679 [2024-09-30 12:35:52.533812] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.679 [2024-09-30 12:35:52.533834] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:40.679 [2024-09-30 12:35:52.533884] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:40.679 [2024-09-30 12:35:52.533919] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:40.679 [2024-09-30 12:35:52.533928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:40.679 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.679 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.679 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:40.679 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:40.679 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.679 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.939 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:40.939 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:40.939 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:40.939 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:40.939 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:40.939 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.939 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.939 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.939 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:40.939 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:40.939 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:40.939 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:40.939 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:40.939 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:40.939 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:40.939 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.939 [2024-09-30 12:35:52.609679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:40.939 [2024-09-30 12:35:52.609730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.939 [2024-09-30 12:35:52.609757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:40.939 [2024-09-30 12:35:52.609768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.939 [2024-09-30 12:35:52.611649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.939 [2024-09-30 12:35:52.611691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:40.939 [2024-09-30 12:35:52.611733] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:40.939 [2024-09-30 12:35:52.611797] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:40.939 [2024-09-30 12:35:52.611888] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:40.939 [2024-09-30 12:35:52.611899] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:40.939 [2024-09-30 12:35:52.611965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:40.939 [2024-09-30 12:35:52.612074] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:40.939 [2024-09-30 12:35:52.612081] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:40.939 [2024-09-30 12:35:52.612161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.939 pt2 00:18:40.940 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:18:40.940 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:40.940 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.940 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.940 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.940 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.940 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:40.940 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.940 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.940 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.940 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.940 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.940 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.940 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.940 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.940 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.940 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.940 "name": "raid_bdev1", 00:18:40.940 
"uuid": "eb664f27-31f4-4171-a970-056ac66c5649", 00:18:40.940 "strip_size_kb": 0, 00:18:40.940 "state": "online", 00:18:40.940 "raid_level": "raid1", 00:18:40.940 "superblock": true, 00:18:40.940 "num_base_bdevs": 2, 00:18:40.940 "num_base_bdevs_discovered": 1, 00:18:40.940 "num_base_bdevs_operational": 1, 00:18:40.940 "base_bdevs_list": [ 00:18:40.940 { 00:18:40.940 "name": null, 00:18:40.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.940 "is_configured": false, 00:18:40.940 "data_offset": 256, 00:18:40.940 "data_size": 7936 00:18:40.940 }, 00:18:40.940 { 00:18:40.940 "name": "pt2", 00:18:40.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.940 "is_configured": true, 00:18:40.940 "data_offset": 256, 00:18:40.940 "data_size": 7936 00:18:40.940 } 00:18:40.940 ] 00:18:40.940 }' 00:18:40.940 12:35:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.940 12:35:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.200 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:41.200 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.200 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.200 [2024-09-30 12:35:53.028923] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:41.200 [2024-09-30 12:35:53.028989] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:41.200 [2024-09-30 12:35:53.029048] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.200 [2024-09-30 12:35:53.029097] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:41.200 [2024-09-30 12:35:53.029126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:41.200 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.200 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.200 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:41.200 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.200 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.200 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.200 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:41.200 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:41.200 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:41.200 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:41.200 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.200 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.200 [2024-09-30 12:35:53.092847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:41.200 [2024-09-30 12:35:53.092925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.200 [2024-09-30 12:35:53.092955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:41.200 [2024-09-30 12:35:53.092977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.200 [2024-09-30 
12:35:53.094822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.200 [2024-09-30 12:35:53.094883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:41.200 [2024-09-30 12:35:53.094946] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:41.200 [2024-09-30 12:35:53.095010] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:41.200 [2024-09-30 12:35:53.095132] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:41.200 [2024-09-30 12:35:53.095179] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:41.200 [2024-09-30 12:35:53.095216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:41.200 [2024-09-30 12:35:53.095316] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:41.200 [2024-09-30 12:35:53.095407] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:41.200 [2024-09-30 12:35:53.095440] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:41.200 [2024-09-30 12:35:53.095519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:41.200 [2024-09-30 12:35:53.095643] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:41.200 [2024-09-30 12:35:53.095680] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:41.200 [2024-09-30 12:35:53.095819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.460 pt1 00:18:41.460 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.460 12:35:53 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:41.460 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:41.460 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.460 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.460 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.460 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.460 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:41.460 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.460 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.460 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.460 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.460 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.460 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.460 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.460 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.460 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.460 12:35:53 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.460 "name": "raid_bdev1", 00:18:41.460 "uuid": "eb664f27-31f4-4171-a970-056ac66c5649", 00:18:41.460 "strip_size_kb": 0, 00:18:41.460 "state": "online", 00:18:41.460 "raid_level": "raid1", 00:18:41.460 "superblock": true, 00:18:41.460 "num_base_bdevs": 2, 00:18:41.460 "num_base_bdevs_discovered": 1, 00:18:41.460 "num_base_bdevs_operational": 1, 00:18:41.460 "base_bdevs_list": [ 00:18:41.460 { 00:18:41.460 "name": null, 00:18:41.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.460 "is_configured": false, 00:18:41.460 "data_offset": 256, 00:18:41.460 "data_size": 7936 00:18:41.460 }, 00:18:41.461 { 00:18:41.461 "name": "pt2", 00:18:41.461 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.461 "is_configured": true, 00:18:41.461 "data_offset": 256, 00:18:41.461 "data_size": 7936 00:18:41.461 } 00:18:41.461 ] 00:18:41.461 }' 00:18:41.461 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.461 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.721 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:41.721 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:41.721 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.721 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.721 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.721 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:41.721 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:18:41.721 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:41.721 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.721 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.721 [2024-09-30 12:35:53.612322] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:41.982 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.982 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' eb664f27-31f4-4171-a970-056ac66c5649 '!=' eb664f27-31f4-4171-a970-056ac66c5649 ']' 00:18:41.982 12:35:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87304 00:18:41.982 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87304 ']' 00:18:41.982 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 87304 00:18:41.982 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:18:41.982 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:41.982 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87304 00:18:41.982 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:41.982 killing process with pid 87304 00:18:41.982 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:41.982 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87304' 00:18:41.982 12:35:53 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@969 -- # kill 87304 00:18:41.982 [2024-09-30 12:35:53.675553] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:41.982 [2024-09-30 12:35:53.675627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.982 [2024-09-30 12:35:53.675661] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:41.982 [2024-09-30 12:35:53.675673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:41.982 12:35:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 87304 00:18:42.242 [2024-09-30 12:35:53.883294] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:43.182 12:35:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:43.182 00:18:43.182 real 0m6.234s 00:18:43.182 user 0m9.314s 00:18:43.182 sys 0m1.188s 00:18:43.182 ************************************ 00:18:43.182 END TEST raid_superblock_test_md_separate 00:18:43.182 ************************************ 00:18:43.182 12:35:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:43.182 12:35:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.442 12:35:55 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:43.442 12:35:55 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:43.442 12:35:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:43.442 12:35:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:43.442 12:35:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.442 ************************************ 00:18:43.442 START TEST raid_rebuild_test_sb_md_separate 00:18:43.442 
************************************ 00:18:43.442 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:18:43.442 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:43.442 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:43.442 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:43.442 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:43.442 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:43.442 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:43.442 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:43.442 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87627 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87627 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87627 ']' 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:43.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:43.443 12:35:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.443 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:43.443 Zero copy mechanism will not be used. 00:18:43.443 [2024-09-30 12:35:55.261173] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:43.443 [2024-09-30 12:35:55.261304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87627 ] 00:18:43.703 [2024-09-30 12:35:55.431586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.963 [2024-09-30 12:35:55.629451] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.963 [2024-09-30 12:35:55.822411] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.963 [2024-09-30 12:35:55.822520] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:44.223 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.223 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:18:44.223 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:44.223 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:44.223 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.223 12:35:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.223 BaseBdev1_malloc 00:18:44.223 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.223 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:44.223 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.223 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.484 [2024-09-30 12:35:56.121086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:44.484 [2024-09-30 12:35:56.121333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.484 [2024-09-30 12:35:56.121364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:44.484 [2024-09-30 12:35:56.121375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.484 [2024-09-30 12:35:56.123197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.484 [2024-09-30 12:35:56.123237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:44.484 BaseBdev1 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:18:44.484 BaseBdev2_malloc 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.484 [2024-09-30 12:35:56.206464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:44.484 [2024-09-30 12:35:56.206556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.484 [2024-09-30 12:35:56.206578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:44.484 [2024-09-30 12:35:56.206588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.484 [2024-09-30 12:35:56.208359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.484 [2024-09-30 12:35:56.208400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:44.484 BaseBdev2 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.484 spare_malloc 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.484 12:35:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.484 spare_delay 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.484 [2024-09-30 12:35:56.272658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:44.484 [2024-09-30 12:35:56.272714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.484 [2024-09-30 12:35:56.272733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:44.484 [2024-09-30 12:35:56.272755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.484 [2024-09-30 12:35:56.274512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.484 [2024-09-30 12:35:56.274586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:44.484 spare 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:44.484 12:35:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.484 [2024-09-30 12:35:56.284690] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:44.484 [2024-09-30 12:35:56.286346] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:44.484 [2024-09-30 12:35:56.286542] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:44.484 [2024-09-30 12:35:56.286578] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:44.484 [2024-09-30 12:35:56.286670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:44.484 [2024-09-30 12:35:56.286829] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:44.484 [2024-09-30 12:35:56.286868] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:44.484 [2024-09-30 12:35:56.286992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.484 12:35:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.484 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.485 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.485 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.485 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.485 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.485 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.485 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.485 "name": "raid_bdev1", 00:18:44.485 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:44.485 "strip_size_kb": 0, 00:18:44.485 "state": "online", 00:18:44.485 "raid_level": "raid1", 00:18:44.485 "superblock": true, 00:18:44.485 "num_base_bdevs": 2, 00:18:44.485 "num_base_bdevs_discovered": 2, 00:18:44.485 "num_base_bdevs_operational": 2, 00:18:44.485 "base_bdevs_list": [ 00:18:44.485 { 00:18:44.485 "name": "BaseBdev1", 00:18:44.485 "uuid": "51ed517d-f469-58e1-a394-aeed4f2fd960", 00:18:44.485 "is_configured": true, 00:18:44.485 "data_offset": 256, 00:18:44.485 "data_size": 7936 00:18:44.485 }, 00:18:44.485 { 00:18:44.485 "name": "BaseBdev2", 00:18:44.485 "uuid": 
"ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:44.485 "is_configured": true, 00:18:44.485 "data_offset": 256, 00:18:44.485 "data_size": 7936 00:18:44.485 } 00:18:44.485 ] 00:18:44.485 }' 00:18:44.485 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.485 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:45.055 [2024-09-30 12:35:56.748090] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:45.055 12:35:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:45.055 12:35:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:45.314 [2024-09-30 12:35:57.027635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:45.314 /dev/nbd0 00:18:45.314 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:45.314 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:45.314 12:35:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:45.314 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:18:45.314 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:45.314 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:45.314 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:45.314 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:45.314 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:45.314 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:45.314 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:45.314 1+0 records in 00:18:45.315 1+0 records out 00:18:45.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521429 s, 7.9 MB/s 00:18:45.315 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.315 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:45.315 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.315 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:45.315 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:45.315 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:45.315 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:45.315 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:45.315 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:45.315 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:45.884 7936+0 records in 00:18:45.884 7936+0 records out 00:18:45.884 32505856 bytes (33 MB, 31 MiB) copied, 0.630351 s, 51.6 MB/s 00:18:45.884 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:45.884 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:45.884 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:45.884 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:45.884 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:45.884 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:45.884 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:46.144 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:46.144 [2024-09-30 12:35:57.945630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.144 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:46.144 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:46.144 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.144 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.144 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:46.144 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:46.144 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.144 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:46.144 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.145 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.145 [2024-09-30 12:35:57.961680] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:46.145 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.145 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:46.145 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.145 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.145 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.145 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.145 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:46.145 12:35:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.145 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.145 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.145 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.145 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.145 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.145 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.145 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.145 12:35:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.145 12:35:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.145 "name": "raid_bdev1", 00:18:46.145 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:46.145 "strip_size_kb": 0, 00:18:46.145 "state": "online", 00:18:46.145 "raid_level": "raid1", 00:18:46.145 "superblock": true, 00:18:46.145 "num_base_bdevs": 2, 00:18:46.145 "num_base_bdevs_discovered": 1, 00:18:46.145 "num_base_bdevs_operational": 1, 00:18:46.145 "base_bdevs_list": [ 00:18:46.145 { 00:18:46.145 "name": null, 00:18:46.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.145 "is_configured": false, 00:18:46.145 "data_offset": 0, 00:18:46.145 "data_size": 7936 00:18:46.145 }, 00:18:46.145 { 00:18:46.145 "name": "BaseBdev2", 00:18:46.145 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:46.145 "is_configured": true, 00:18:46.145 "data_offset": 256, 00:18:46.145 "data_size": 7936 00:18:46.145 } 
00:18:46.145 ] 00:18:46.145 }' 00:18:46.145 12:35:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.145 12:35:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.714 12:35:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:46.714 12:35:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.714 12:35:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.714 [2024-09-30 12:35:58.420880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:46.714 [2024-09-30 12:35:58.433213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:46.714 12:35:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.714 12:35:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:46.714 [2024-09-30 12:35:58.434965] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:47.654 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.654 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.654 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:47.654 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:47.654 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.654 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.655 12:35:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.655 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.655 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.655 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.655 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.655 "name": "raid_bdev1", 00:18:47.655 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:47.655 "strip_size_kb": 0, 00:18:47.655 "state": "online", 00:18:47.655 "raid_level": "raid1", 00:18:47.655 "superblock": true, 00:18:47.655 "num_base_bdevs": 2, 00:18:47.655 "num_base_bdevs_discovered": 2, 00:18:47.655 "num_base_bdevs_operational": 2, 00:18:47.655 "process": { 00:18:47.655 "type": "rebuild", 00:18:47.655 "target": "spare", 00:18:47.655 "progress": { 00:18:47.655 "blocks": 2560, 00:18:47.655 "percent": 32 00:18:47.655 } 00:18:47.655 }, 00:18:47.655 "base_bdevs_list": [ 00:18:47.655 { 00:18:47.655 "name": "spare", 00:18:47.655 "uuid": "dbe45825-bd28-5185-a88a-ecb17ece5df0", 00:18:47.655 "is_configured": true, 00:18:47.655 "data_offset": 256, 00:18:47.655 "data_size": 7936 00:18:47.655 }, 00:18:47.655 { 00:18:47.655 "name": "BaseBdev2", 00:18:47.655 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:47.655 "is_configured": true, 00:18:47.655 "data_offset": 256, 00:18:47.655 "data_size": 7936 00:18:47.655 } 00:18:47.655 ] 00:18:47.655 }' 00:18:47.655 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.655 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:47.655 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.914 [2024-09-30 12:35:59.599236] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:47.914 [2024-09-30 12:35:59.639699] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:47.914 [2024-09-30 12:35:59.639818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.914 [2024-09-30 12:35:59.639834] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:47.914 [2024-09-30 12:35:59.639849] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.914 "name": "raid_bdev1", 00:18:47.914 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:47.914 "strip_size_kb": 0, 00:18:47.914 "state": "online", 00:18:47.914 "raid_level": "raid1", 00:18:47.914 "superblock": true, 00:18:47.914 "num_base_bdevs": 2, 00:18:47.914 "num_base_bdevs_discovered": 1, 00:18:47.914 "num_base_bdevs_operational": 1, 00:18:47.914 "base_bdevs_list": [ 00:18:47.914 { 00:18:47.914 "name": null, 00:18:47.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.914 "is_configured": false, 00:18:47.914 "data_offset": 0, 00:18:47.914 "data_size": 7936 00:18:47.914 }, 00:18:47.914 { 00:18:47.914 "name": "BaseBdev2", 00:18:47.914 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:47.914 "is_configured": true, 00:18:47.914 "data_offset": 
256, 00:18:47.914 "data_size": 7936 00:18:47.914 } 00:18:47.914 ] 00:18:47.914 }' 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.914 12:35:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.483 "name": "raid_bdev1", 00:18:48.483 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:48.483 "strip_size_kb": 0, 00:18:48.483 "state": "online", 00:18:48.483 "raid_level": "raid1", 00:18:48.483 "superblock": true, 00:18:48.483 "num_base_bdevs": 2, 00:18:48.483 "num_base_bdevs_discovered": 1, 00:18:48.483 "num_base_bdevs_operational": 1, 
00:18:48.483 "base_bdevs_list": [ 00:18:48.483 { 00:18:48.483 "name": null, 00:18:48.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.483 "is_configured": false, 00:18:48.483 "data_offset": 0, 00:18:48.483 "data_size": 7936 00:18:48.483 }, 00:18:48.483 { 00:18:48.483 "name": "BaseBdev2", 00:18:48.483 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:48.483 "is_configured": true, 00:18:48.483 "data_offset": 256, 00:18:48.483 "data_size": 7936 00:18:48.483 } 00:18:48.483 ] 00:18:48.483 }' 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.483 [2024-09-30 12:36:00.201806] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:48.483 [2024-09-30 12:36:00.215182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.483 12:36:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:48.484 [2024-09-30 12:36:00.216935] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:49.422 12:36:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.422 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.422 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:49.422 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:49.422 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.422 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.422 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.422 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.422 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.422 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.422 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.422 "name": "raid_bdev1", 00:18:49.422 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:49.422 "strip_size_kb": 0, 00:18:49.422 "state": "online", 00:18:49.422 "raid_level": "raid1", 00:18:49.422 "superblock": true, 00:18:49.422 "num_base_bdevs": 2, 00:18:49.422 "num_base_bdevs_discovered": 2, 00:18:49.422 "num_base_bdevs_operational": 2, 00:18:49.422 "process": { 00:18:49.422 "type": "rebuild", 00:18:49.422 "target": "spare", 00:18:49.422 "progress": { 00:18:49.422 "blocks": 2560, 00:18:49.422 "percent": 32 00:18:49.422 } 00:18:49.422 }, 00:18:49.422 "base_bdevs_list": [ 00:18:49.422 { 00:18:49.422 "name": "spare", 00:18:49.422 "uuid": 
"dbe45825-bd28-5185-a88a-ecb17ece5df0", 00:18:49.422 "is_configured": true, 00:18:49.422 "data_offset": 256, 00:18:49.422 "data_size": 7936 00:18:49.422 }, 00:18:49.422 { 00:18:49.422 "name": "BaseBdev2", 00:18:49.422 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:49.422 "is_configured": true, 00:18:49.422 "data_offset": 256, 00:18:49.422 "data_size": 7936 00:18:49.422 } 00:18:49.422 ] 00:18:49.422 }' 00:18:49.422 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:49.681 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=706 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.681 
12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.681 "name": "raid_bdev1", 00:18:49.681 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:49.681 "strip_size_kb": 0, 00:18:49.681 "state": "online", 00:18:49.681 "raid_level": "raid1", 00:18:49.681 "superblock": true, 00:18:49.681 "num_base_bdevs": 2, 00:18:49.681 "num_base_bdevs_discovered": 2, 00:18:49.681 "num_base_bdevs_operational": 2, 00:18:49.681 "process": { 00:18:49.681 "type": "rebuild", 00:18:49.681 "target": "spare", 00:18:49.681 "progress": { 00:18:49.681 "blocks": 2816, 00:18:49.681 "percent": 35 00:18:49.681 } 00:18:49.681 }, 00:18:49.681 "base_bdevs_list": [ 00:18:49.681 { 00:18:49.681 "name": "spare", 00:18:49.681 "uuid": "dbe45825-bd28-5185-a88a-ecb17ece5df0", 00:18:49.681 "is_configured": true, 00:18:49.681 "data_offset": 256, 00:18:49.681 "data_size": 7936 00:18:49.681 
}, 00:18:49.681 { 00:18:49.681 "name": "BaseBdev2", 00:18:49.681 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:49.681 "is_configured": true, 00:18:49.681 "data_offset": 256, 00:18:49.681 "data_size": 7936 00:18:49.681 } 00:18:49.681 ] 00:18:49.681 }' 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.681 12:36:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:51.060 12:36:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:51.060 12:36:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.060 12:36:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.060 12:36:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.060 12:36:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.060 12:36:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.060 12:36:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.060 12:36:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.060 12:36:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:51.060 12:36:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.060 12:36:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.060 12:36:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.060 "name": "raid_bdev1", 00:18:51.060 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:51.060 "strip_size_kb": 0, 00:18:51.060 "state": "online", 00:18:51.060 "raid_level": "raid1", 00:18:51.060 "superblock": true, 00:18:51.060 "num_base_bdevs": 2, 00:18:51.060 "num_base_bdevs_discovered": 2, 00:18:51.060 "num_base_bdevs_operational": 2, 00:18:51.060 "process": { 00:18:51.060 "type": "rebuild", 00:18:51.060 "target": "spare", 00:18:51.060 "progress": { 00:18:51.060 "blocks": 5888, 00:18:51.060 "percent": 74 00:18:51.060 } 00:18:51.060 }, 00:18:51.060 "base_bdevs_list": [ 00:18:51.060 { 00:18:51.060 "name": "spare", 00:18:51.060 "uuid": "dbe45825-bd28-5185-a88a-ecb17ece5df0", 00:18:51.060 "is_configured": true, 00:18:51.060 "data_offset": 256, 00:18:51.060 "data_size": 7936 00:18:51.060 }, 00:18:51.060 { 00:18:51.060 "name": "BaseBdev2", 00:18:51.060 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:51.060 "is_configured": true, 00:18:51.060 "data_offset": 256, 00:18:51.060 "data_size": 7936 00:18:51.060 } 00:18:51.060 ] 00:18:51.060 }' 00:18:51.060 12:36:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.060 12:36:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.060 12:36:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.060 12:36:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.060 12:36:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:18:51.629 [2024-09-30 12:36:03.328276] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:51.629 [2024-09-30 12:36:03.328338] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:51.629 [2024-09-30 12:36:03.328446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.887 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:51.887 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.887 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.887 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.887 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.887 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.887 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.887 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.887 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.887 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.887 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.887 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.887 "name": "raid_bdev1", 00:18:51.887 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:51.887 
"strip_size_kb": 0, 00:18:51.887 "state": "online", 00:18:51.887 "raid_level": "raid1", 00:18:51.887 "superblock": true, 00:18:51.887 "num_base_bdevs": 2, 00:18:51.887 "num_base_bdevs_discovered": 2, 00:18:51.887 "num_base_bdevs_operational": 2, 00:18:51.887 "base_bdevs_list": [ 00:18:51.887 { 00:18:51.887 "name": "spare", 00:18:51.887 "uuid": "dbe45825-bd28-5185-a88a-ecb17ece5df0", 00:18:51.887 "is_configured": true, 00:18:51.887 "data_offset": 256, 00:18:51.887 "data_size": 7936 00:18:51.887 }, 00:18:51.887 { 00:18:51.887 "name": "BaseBdev2", 00:18:51.887 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:51.887 "is_configured": true, 00:18:51.887 "data_offset": 256, 00:18:51.887 "data_size": 7936 00:18:51.887 } 00:18:51.887 ] 00:18:51.887 }' 00:18:51.887 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.146 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:52.146 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.146 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:52.146 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:52.146 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:52.146 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.146 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:52.146 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:52.146 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.146 12:36:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.146 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.146 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.147 "name": "raid_bdev1", 00:18:52.147 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:52.147 "strip_size_kb": 0, 00:18:52.147 "state": "online", 00:18:52.147 "raid_level": "raid1", 00:18:52.147 "superblock": true, 00:18:52.147 "num_base_bdevs": 2, 00:18:52.147 "num_base_bdevs_discovered": 2, 00:18:52.147 "num_base_bdevs_operational": 2, 00:18:52.147 "base_bdevs_list": [ 00:18:52.147 { 00:18:52.147 "name": "spare", 00:18:52.147 "uuid": "dbe45825-bd28-5185-a88a-ecb17ece5df0", 00:18:52.147 "is_configured": true, 00:18:52.147 "data_offset": 256, 00:18:52.147 "data_size": 7936 00:18:52.147 }, 00:18:52.147 { 00:18:52.147 "name": "BaseBdev2", 00:18:52.147 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:52.147 "is_configured": true, 00:18:52.147 "data_offset": 256, 00:18:52.147 "data_size": 7936 00:18:52.147 } 00:18:52.147 ] 00:18:52.147 }' 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.147 12:36:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.147 12:36:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.147 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.147 "name": "raid_bdev1", 00:18:52.147 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:52.147 "strip_size_kb": 0, 00:18:52.147 "state": "online", 00:18:52.147 "raid_level": "raid1", 00:18:52.147 "superblock": true, 00:18:52.147 "num_base_bdevs": 2, 00:18:52.147 "num_base_bdevs_discovered": 2, 00:18:52.147 "num_base_bdevs_operational": 2, 00:18:52.147 "base_bdevs_list": [ 00:18:52.147 { 00:18:52.147 "name": "spare", 00:18:52.147 "uuid": "dbe45825-bd28-5185-a88a-ecb17ece5df0", 00:18:52.147 "is_configured": true, 00:18:52.147 "data_offset": 256, 00:18:52.147 "data_size": 7936 00:18:52.147 }, 00:18:52.147 { 00:18:52.147 "name": "BaseBdev2", 00:18:52.147 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:52.147 "is_configured": true, 00:18:52.147 "data_offset": 256, 00:18:52.147 "data_size": 7936 00:18:52.147 } 00:18:52.147 ] 00:18:52.147 }' 00:18:52.147 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.147 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.716 [2024-09-30 12:36:04.436894] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:52.716 [2024-09-30 12:36:04.436923] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:52.716 [2024-09-30 12:36:04.436985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:52.716 [2024-09-30 12:36:04.437040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:18:52.716 [2024-09-30 12:36:04.437048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:52.716 12:36:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:52.716 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:52.976 /dev/nbd0 00:18:52.976 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:52.976 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:52.976 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:52.976 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:18:52.976 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:52.976 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:52.976 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:52.976 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:52.976 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:52.976 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:52.976 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:52.976 1+0 records in 00:18:52.976 1+0 records out 00:18:52.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047534 
s, 8.6 MB/s 00:18:52.976 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:52.976 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:52.976 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:52.976 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:52.976 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:52.976 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:52.976 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:52.976 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:53.236 /dev/nbd1 00:18:53.236 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:53.236 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:53.236 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:53.236 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:18:53.236 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:53.236 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:53.236 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:53.236 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # break 00:18:53.236 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:53.236 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:53.236 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:53.236 1+0 records in 00:18:53.236 1+0 records out 00:18:53.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423808 s, 9.7 MB/s 00:18:53.236 12:36:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:53.236 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:53.236 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:53.236 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:53.236 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:53.236 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:53.236 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:53.236 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:53.496 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:53.496 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:53.496 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:53.496 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:53.496 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:53.496 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:53.496 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:53.496 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:53.496 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:53.757 
12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.757 [2024-09-30 12:36:05.637732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:53.757 [2024-09-30 12:36:05.637788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.757 [2024-09-30 12:36:05.637808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
00:18:53.757 [2024-09-30 12:36:05.637816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.757 [2024-09-30 12:36:05.639656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.757 [2024-09-30 12:36:05.639726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:53.757 [2024-09-30 12:36:05.639813] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:53.757 [2024-09-30 12:36:05.639891] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:53.757 [2024-09-30 12:36:05.640054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:53.757 spare 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.757 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.017 [2024-09-30 12:36:05.739967] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:54.017 [2024-09-30 12:36:05.739994] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:54.017 [2024-09-30 12:36:05.740074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:54.017 [2024-09-30 12:36:05.740198] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:54.017 [2024-09-30 12:36:05.740206] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:54.017 [2024-09-30 12:36:05.740319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:54.017 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.017 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:54.017 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.017 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.017 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.017 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.017 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.017 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.017 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.017 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.017 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.017 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.017 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.017 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.017 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.017 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.017 12:36:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.017 "name": "raid_bdev1", 00:18:54.017 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:54.018 "strip_size_kb": 0, 00:18:54.018 "state": "online", 00:18:54.018 "raid_level": "raid1", 00:18:54.018 "superblock": true, 00:18:54.018 "num_base_bdevs": 2, 00:18:54.018 "num_base_bdevs_discovered": 2, 00:18:54.018 "num_base_bdevs_operational": 2, 00:18:54.018 "base_bdevs_list": [ 00:18:54.018 { 00:18:54.018 "name": "spare", 00:18:54.018 "uuid": "dbe45825-bd28-5185-a88a-ecb17ece5df0", 00:18:54.018 "is_configured": true, 00:18:54.018 "data_offset": 256, 00:18:54.018 "data_size": 7936 00:18:54.018 }, 00:18:54.018 { 00:18:54.018 "name": "BaseBdev2", 00:18:54.018 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:54.018 "is_configured": true, 00:18:54.018 "data_offset": 256, 00:18:54.018 "data_size": 7936 00:18:54.018 } 00:18:54.018 ] 00:18:54.018 }' 00:18:54.018 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.018 12:36:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.588 "name": "raid_bdev1", 00:18:54.588 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:54.588 "strip_size_kb": 0, 00:18:54.588 "state": "online", 00:18:54.588 "raid_level": "raid1", 00:18:54.588 "superblock": true, 00:18:54.588 "num_base_bdevs": 2, 00:18:54.588 "num_base_bdevs_discovered": 2, 00:18:54.588 "num_base_bdevs_operational": 2, 00:18:54.588 "base_bdevs_list": [ 00:18:54.588 { 00:18:54.588 "name": "spare", 00:18:54.588 "uuid": "dbe45825-bd28-5185-a88a-ecb17ece5df0", 00:18:54.588 "is_configured": true, 00:18:54.588 "data_offset": 256, 00:18:54.588 "data_size": 7936 00:18:54.588 }, 00:18:54.588 { 00:18:54.588 "name": "BaseBdev2", 00:18:54.588 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:54.588 "is_configured": true, 00:18:54.588 "data_offset": 256, 00:18:54.588 "data_size": 7936 00:18:54.588 } 00:18:54.588 ] 00:18:54.588 }' 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.588 [2024-09-30 12:36:06.400447] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:54.588 12:36:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.588 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.588 "name": "raid_bdev1", 00:18:54.588 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:54.588 "strip_size_kb": 0, 00:18:54.588 "state": "online", 00:18:54.589 "raid_level": "raid1", 00:18:54.589 "superblock": true, 00:18:54.589 "num_base_bdevs": 2, 00:18:54.589 "num_base_bdevs_discovered": 1, 00:18:54.589 "num_base_bdevs_operational": 1, 00:18:54.589 "base_bdevs_list": [ 00:18:54.589 { 00:18:54.589 "name": null, 00:18:54.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.589 "is_configured": false, 00:18:54.589 "data_offset": 0, 00:18:54.589 "data_size": 7936 00:18:54.589 }, 00:18:54.589 { 00:18:54.589 "name": "BaseBdev2", 00:18:54.589 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:54.589 "is_configured": true, 00:18:54.589 "data_offset": 256, 00:18:54.589 "data_size": 7936 00:18:54.589 } 
00:18:54.589 ] 00:18:54.589 }' 00:18:54.589 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.589 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.158 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:55.158 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.158 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.158 [2024-09-30 12:36:06.815790] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:55.158 [2024-09-30 12:36:06.815952] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:55.158 [2024-09-30 12:36:06.816011] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:55.159 [2024-09-30 12:36:06.816070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:55.159 [2024-09-30 12:36:06.829485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:55.159 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.159 12:36:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:55.159 [2024-09-30 12:36:06.831293] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:56.099 12:36:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.099 12:36:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.099 12:36:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:56.099 12:36:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:56.099 12:36:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.099 12:36:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.099 12:36:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.099 12:36:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.099 12:36:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.099 12:36:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.099 12:36:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.099 "name": "raid_bdev1", 00:18:56.099 
"uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:56.099 "strip_size_kb": 0, 00:18:56.099 "state": "online", 00:18:56.099 "raid_level": "raid1", 00:18:56.099 "superblock": true, 00:18:56.099 "num_base_bdevs": 2, 00:18:56.099 "num_base_bdevs_discovered": 2, 00:18:56.099 "num_base_bdevs_operational": 2, 00:18:56.099 "process": { 00:18:56.099 "type": "rebuild", 00:18:56.099 "target": "spare", 00:18:56.099 "progress": { 00:18:56.099 "blocks": 2560, 00:18:56.099 "percent": 32 00:18:56.099 } 00:18:56.099 }, 00:18:56.099 "base_bdevs_list": [ 00:18:56.099 { 00:18:56.099 "name": "spare", 00:18:56.099 "uuid": "dbe45825-bd28-5185-a88a-ecb17ece5df0", 00:18:56.099 "is_configured": true, 00:18:56.099 "data_offset": 256, 00:18:56.099 "data_size": 7936 00:18:56.099 }, 00:18:56.099 { 00:18:56.099 "name": "BaseBdev2", 00:18:56.099 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:56.099 "is_configured": true, 00:18:56.099 "data_offset": 256, 00:18:56.099 "data_size": 7936 00:18:56.099 } 00:18:56.099 ] 00:18:56.099 }' 00:18:56.099 12:36:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.099 12:36:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:56.099 12:36:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.099 12:36:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:56.099 12:36:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:56.099 12:36:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.099 12:36:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.099 [2024-09-30 12:36:07.992082] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:56.359 
[2024-09-30 12:36:08.035931] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:56.359 [2024-09-30 12:36:08.036003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.359 [2024-09-30 12:36:08.036017] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:56.359 [2024-09-30 12:36:08.036026] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:56.359 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.359 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:56.359 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.359 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.359 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.359 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.359 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:56.359 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.359 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.359 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.359 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.359 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:18:56.359 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.359 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.359 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.359 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.359 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.359 "name": "raid_bdev1", 00:18:56.359 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:56.359 "strip_size_kb": 0, 00:18:56.359 "state": "online", 00:18:56.359 "raid_level": "raid1", 00:18:56.359 "superblock": true, 00:18:56.359 "num_base_bdevs": 2, 00:18:56.359 "num_base_bdevs_discovered": 1, 00:18:56.359 "num_base_bdevs_operational": 1, 00:18:56.359 "base_bdevs_list": [ 00:18:56.359 { 00:18:56.359 "name": null, 00:18:56.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.359 "is_configured": false, 00:18:56.359 "data_offset": 0, 00:18:56.359 "data_size": 7936 00:18:56.359 }, 00:18:56.359 { 00:18:56.359 "name": "BaseBdev2", 00:18:56.359 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:56.359 "is_configured": true, 00:18:56.359 "data_offset": 256, 00:18:56.359 "data_size": 7936 00:18:56.359 } 00:18:56.359 ] 00:18:56.359 }' 00:18:56.359 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.359 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.619 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:56.619 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.619 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:56.619 [2024-09-30 12:36:08.489819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:56.619 [2024-09-30 12:36:08.489911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.619 [2024-09-30 12:36:08.489951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:56.619 [2024-09-30 12:36:08.489982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.619 [2024-09-30 12:36:08.490229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.619 [2024-09-30 12:36:08.490281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:56.619 [2024-09-30 12:36:08.490351] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:56.619 [2024-09-30 12:36:08.490393] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:56.619 [2024-09-30 12:36:08.490436] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:56.619 [2024-09-30 12:36:08.490521] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:56.619 [2024-09-30 12:36:08.504005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:56.619 spare 00:18:56.619 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.619 [2024-09-30 12:36:08.505789] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:56.619 12:36:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:58.001 "name": 
"raid_bdev1", 00:18:58.001 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:58.001 "strip_size_kb": 0, 00:18:58.001 "state": "online", 00:18:58.001 "raid_level": "raid1", 00:18:58.001 "superblock": true, 00:18:58.001 "num_base_bdevs": 2, 00:18:58.001 "num_base_bdevs_discovered": 2, 00:18:58.001 "num_base_bdevs_operational": 2, 00:18:58.001 "process": { 00:18:58.001 "type": "rebuild", 00:18:58.001 "target": "spare", 00:18:58.001 "progress": { 00:18:58.001 "blocks": 2560, 00:18:58.001 "percent": 32 00:18:58.001 } 00:18:58.001 }, 00:18:58.001 "base_bdevs_list": [ 00:18:58.001 { 00:18:58.001 "name": "spare", 00:18:58.001 "uuid": "dbe45825-bd28-5185-a88a-ecb17ece5df0", 00:18:58.001 "is_configured": true, 00:18:58.001 "data_offset": 256, 00:18:58.001 "data_size": 7936 00:18:58.001 }, 00:18:58.001 { 00:18:58.001 "name": "BaseBdev2", 00:18:58.001 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:58.001 "is_configured": true, 00:18:58.001 "data_offset": 256, 00:18:58.001 "data_size": 7936 00:18:58.001 } 00:18:58.001 ] 00:18:58.001 }' 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.001 [2024-09-30 12:36:09.649833] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:58.001 [2024-09-30 12:36:09.710161] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:58.001 [2024-09-30 12:36:09.710259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.001 [2024-09-30 12:36:09.710293] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:58.001 [2024-09-30 12:36:09.710313] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.001 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.002 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:58.002 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.002 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.002 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.002 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.002 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.002 "name": "raid_bdev1", 00:18:58.002 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:58.002 "strip_size_kb": 0, 00:18:58.002 "state": "online", 00:18:58.002 "raid_level": "raid1", 00:18:58.002 "superblock": true, 00:18:58.002 "num_base_bdevs": 2, 00:18:58.002 "num_base_bdevs_discovered": 1, 00:18:58.002 "num_base_bdevs_operational": 1, 00:18:58.002 "base_bdevs_list": [ 00:18:58.002 { 00:18:58.002 "name": null, 00:18:58.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.002 "is_configured": false, 00:18:58.002 "data_offset": 0, 00:18:58.002 "data_size": 7936 00:18:58.002 }, 00:18:58.002 { 00:18:58.002 "name": "BaseBdev2", 00:18:58.002 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:58.002 "is_configured": true, 00:18:58.002 "data_offset": 256, 00:18:58.002 "data_size": 7936 00:18:58.002 } 00:18:58.002 ] 00:18:58.002 }' 00:18:58.002 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.002 12:36:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.572 12:36:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:58.572 "name": "raid_bdev1", 00:18:58.572 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:58.572 "strip_size_kb": 0, 00:18:58.572 "state": "online", 00:18:58.572 "raid_level": "raid1", 00:18:58.572 "superblock": true, 00:18:58.572 "num_base_bdevs": 2, 00:18:58.572 "num_base_bdevs_discovered": 1, 00:18:58.572 "num_base_bdevs_operational": 1, 00:18:58.572 "base_bdevs_list": [ 00:18:58.572 { 00:18:58.572 "name": null, 00:18:58.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.572 "is_configured": false, 00:18:58.572 "data_offset": 0, 00:18:58.572 "data_size": 7936 00:18:58.572 }, 00:18:58.572 { 00:18:58.572 "name": "BaseBdev2", 00:18:58.572 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:58.572 "is_configured": true, 00:18:58.572 "data_offset": 256, 00:18:58.572 "data_size": 7936 00:18:58.572 } 00:18:58.572 ] 00:18:58.572 }' 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.572 [2024-09-30 12:36:10.331792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:58.572 [2024-09-30 12:36:10.331835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.572 [2024-09-30 12:36:10.331857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:58.572 [2024-09-30 12:36:10.331864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.572 [2024-09-30 12:36:10.332049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.572 [2024-09-30 12:36:10.332060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:18:58.572 [2024-09-30 12:36:10.332099] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:58.572 [2024-09-30 12:36:10.332115] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:58.572 [2024-09-30 12:36:10.332123] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:58.572 [2024-09-30 12:36:10.332131] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:58.572 BaseBdev1 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.572 12:36:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:59.513 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:59.513 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.513 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.513 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.513 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.513 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:59.513 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.513 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.513 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:59.513 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.513 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.513 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.513 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.513 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.513 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.513 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.513 "name": "raid_bdev1", 00:18:59.513 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:18:59.513 "strip_size_kb": 0, 00:18:59.513 "state": "online", 00:18:59.513 "raid_level": "raid1", 00:18:59.513 "superblock": true, 00:18:59.513 "num_base_bdevs": 2, 00:18:59.513 "num_base_bdevs_discovered": 1, 00:18:59.513 "num_base_bdevs_operational": 1, 00:18:59.513 "base_bdevs_list": [ 00:18:59.513 { 00:18:59.513 "name": null, 00:18:59.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.513 "is_configured": false, 00:18:59.513 "data_offset": 0, 00:18:59.513 "data_size": 7936 00:18:59.513 }, 00:18:59.513 { 00:18:59.513 "name": "BaseBdev2", 00:18:59.513 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:18:59.513 "is_configured": true, 00:18:59.513 "data_offset": 256, 00:18:59.513 "data_size": 7936 00:18:59.513 } 00:18:59.513 ] 00:18:59.513 }' 00:18:59.513 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.513 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.082 "name": "raid_bdev1", 00:19:00.082 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:19:00.082 "strip_size_kb": 0, 00:19:00.082 "state": "online", 00:19:00.082 "raid_level": "raid1", 00:19:00.082 "superblock": true, 00:19:00.082 "num_base_bdevs": 2, 00:19:00.082 "num_base_bdevs_discovered": 1, 00:19:00.082 "num_base_bdevs_operational": 1, 00:19:00.082 "base_bdevs_list": [ 00:19:00.082 { 00:19:00.082 "name": null, 00:19:00.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.082 "is_configured": false, 00:19:00.082 "data_offset": 0, 00:19:00.082 "data_size": 7936 00:19:00.082 }, 00:19:00.082 { 00:19:00.082 "name": "BaseBdev2", 00:19:00.082 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:19:00.082 "is_configured": 
true, 00:19:00.082 "data_offset": 256, 00:19:00.082 "data_size": 7936 00:19:00.082 } 00:19:00.082 ] 00:19:00.082 }' 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.082 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.083 [2024-09-30 12:36:11.889297] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.083 [2024-09-30 12:36:11.889487] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:00.083 [2024-09-30 12:36:11.889506] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:00.083 request: 00:19:00.083 { 00:19:00.083 "base_bdev": "BaseBdev1", 00:19:00.083 "raid_bdev": "raid_bdev1", 00:19:00.083 "method": "bdev_raid_add_base_bdev", 00:19:00.083 "req_id": 1 00:19:00.083 } 00:19:00.083 Got JSON-RPC error response 00:19:00.083 response: 00:19:00.083 { 00:19:00.083 "code": -22, 00:19:00.083 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:00.083 } 00:19:00.083 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:00.083 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:19:00.083 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:00.083 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:00.083 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:00.083 12:36:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:01.022 12:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:01.022 12:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.022 12:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.022 12:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:19:01.022 12:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.022 12:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:01.022 12:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.022 12:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.022 12:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.022 12:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.022 12:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.022 12:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.022 12:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.022 12:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.282 12:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.282 12:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.282 "name": "raid_bdev1", 00:19:01.282 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:19:01.282 "strip_size_kb": 0, 00:19:01.282 "state": "online", 00:19:01.282 "raid_level": "raid1", 00:19:01.282 "superblock": true, 00:19:01.282 "num_base_bdevs": 2, 00:19:01.282 "num_base_bdevs_discovered": 1, 00:19:01.282 "num_base_bdevs_operational": 1, 00:19:01.282 "base_bdevs_list": [ 00:19:01.282 { 00:19:01.282 "name": null, 00:19:01.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.282 "is_configured": false, 00:19:01.282 
"data_offset": 0, 00:19:01.282 "data_size": 7936 00:19:01.282 }, 00:19:01.282 { 00:19:01.282 "name": "BaseBdev2", 00:19:01.282 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:19:01.282 "is_configured": true, 00:19:01.282 "data_offset": 256, 00:19:01.282 "data_size": 7936 00:19:01.282 } 00:19:01.282 ] 00:19:01.282 }' 00:19:01.282 12:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.282 12:36:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.542 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:01.542 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.542 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:01.542 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:01.542 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.542 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.542 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.542 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.542 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.542 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.542 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.542 "name": "raid_bdev1", 00:19:01.542 "uuid": "11cff520-6a3b-439c-9ce4-8fd792a3e248", 00:19:01.542 
"strip_size_kb": 0, 00:19:01.542 "state": "online", 00:19:01.542 "raid_level": "raid1", 00:19:01.542 "superblock": true, 00:19:01.542 "num_base_bdevs": 2, 00:19:01.542 "num_base_bdevs_discovered": 1, 00:19:01.542 "num_base_bdevs_operational": 1, 00:19:01.542 "base_bdevs_list": [ 00:19:01.542 { 00:19:01.542 "name": null, 00:19:01.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.542 "is_configured": false, 00:19:01.542 "data_offset": 0, 00:19:01.542 "data_size": 7936 00:19:01.542 }, 00:19:01.542 { 00:19:01.542 "name": "BaseBdev2", 00:19:01.542 "uuid": "ef2c0736-fff4-5285-b85e-26ecf97359a6", 00:19:01.542 "is_configured": true, 00:19:01.542 "data_offset": 256, 00:19:01.542 "data_size": 7936 00:19:01.542 } 00:19:01.542 ] 00:19:01.542 }' 00:19:01.542 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.802 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:01.802 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.802 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:01.802 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87627 00:19:01.802 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87627 ']' 00:19:01.802 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 87627 00:19:01.802 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:19:01.802 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:01.802 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87627 00:19:01.802 12:36:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:01.802 killing process with pid 87627 00:19:01.802 Received shutdown signal, test time was about 60.000000 seconds 00:19:01.802 00:19:01.802 Latency(us) 00:19:01.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.802 =================================================================================================================== 00:19:01.802 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:01.802 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:01.802 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87627' 00:19:01.802 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 87627 00:19:01.802 [2024-09-30 12:36:13.540301] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:01.802 [2024-09-30 12:36:13.540403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:01.802 [2024-09-30 12:36:13.540442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:01.802 [2024-09-30 12:36:13.540452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:01.802 12:36:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 87627 00:19:02.061 [2024-09-30 12:36:13.840755] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:03.443 ************************************ 00:19:03.443 END TEST raid_rebuild_test_sb_md_separate 00:19:03.443 ************************************ 00:19:03.443 12:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:19:03.443 00:19:03.443 real 0m19.855s 00:19:03.443 user 
0m25.793s 00:19:03.443 sys 0m2.775s 00:19:03.443 12:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:03.443 12:36:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.443 12:36:15 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:19:03.443 12:36:15 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:19:03.443 12:36:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:03.443 12:36:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:03.443 12:36:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:03.443 ************************************ 00:19:03.443 START TEST raid_state_function_test_sb_md_interleaved 00:19:03.443 ************************************ 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:03.443 12:36:15 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # 
raid_pid=88318 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88318' 00:19:03.443 Process raid pid: 88318 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88318 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88318 ']' 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:03.443 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.443 [2024-09-30 12:36:15.178638] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:19:03.443 [2024-09-30 12:36:15.178813] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.705 [2024-09-30 12:36:15.341816] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.705 [2024-09-30 12:36:15.537081] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.963 [2024-09-30 12:36:15.712686] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:03.963 [2024-09-30 12:36:15.712807] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.224 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:04.224 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:19:04.224 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:04.224 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.224 12:36:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.224 [2024-09-30 12:36:15.997360] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:04.224 [2024-09-30 12:36:15.997458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:04.224 [2024-09-30 12:36:15.997488] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:04.224 [2024-09-30 12:36:15.997497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:04.224 12:36:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.224 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:04.224 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:04.224 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:04.224 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.224 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.224 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:04.224 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.224 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.224 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.224 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.224 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.224 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.224 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.224 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.224 12:36:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.224 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.224 "name": "Existed_Raid", 00:19:04.224 "uuid": "c03be395-9988-43b8-a08f-028839779c45", 00:19:04.224 "strip_size_kb": 0, 00:19:04.224 "state": "configuring", 00:19:04.224 "raid_level": "raid1", 00:19:04.224 "superblock": true, 00:19:04.224 "num_base_bdevs": 2, 00:19:04.224 "num_base_bdevs_discovered": 0, 00:19:04.224 "num_base_bdevs_operational": 2, 00:19:04.224 "base_bdevs_list": [ 00:19:04.224 { 00:19:04.224 "name": "BaseBdev1", 00:19:04.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.224 "is_configured": false, 00:19:04.224 "data_offset": 0, 00:19:04.224 "data_size": 0 00:19:04.224 }, 00:19:04.224 { 00:19:04.224 "name": "BaseBdev2", 00:19:04.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.224 "is_configured": false, 00:19:04.224 "data_offset": 0, 00:19:04.224 "data_size": 0 00:19:04.224 } 00:19:04.224 ] 00:19:04.224 }' 00:19:04.224 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.224 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.794 [2024-09-30 12:36:16.388595] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:04.794 [2024-09-30 12:36:16.388668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.794 [2024-09-30 12:36:16.400597] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:04.794 [2024-09-30 12:36:16.400636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:04.794 [2024-09-30 12:36:16.400644] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:04.794 [2024-09-30 12:36:16.400671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.794 [2024-09-30 12:36:16.479938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:04.794 BaseBdev1 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.794 [ 00:19:04.794 { 00:19:04.794 "name": "BaseBdev1", 00:19:04.794 "aliases": [ 00:19:04.794 "813109f5-4406-4843-af05-8e08e7bb5dac" 00:19:04.794 ], 00:19:04.794 "product_name": "Malloc disk", 00:19:04.794 "block_size": 4128, 00:19:04.794 "num_blocks": 8192, 00:19:04.794 "uuid": "813109f5-4406-4843-af05-8e08e7bb5dac", 00:19:04.794 "md_size": 32, 00:19:04.794 
"md_interleave": true, 00:19:04.794 "dif_type": 0, 00:19:04.794 "assigned_rate_limits": { 00:19:04.794 "rw_ios_per_sec": 0, 00:19:04.794 "rw_mbytes_per_sec": 0, 00:19:04.794 "r_mbytes_per_sec": 0, 00:19:04.794 "w_mbytes_per_sec": 0 00:19:04.794 }, 00:19:04.794 "claimed": true, 00:19:04.794 "claim_type": "exclusive_write", 00:19:04.794 "zoned": false, 00:19:04.794 "supported_io_types": { 00:19:04.794 "read": true, 00:19:04.794 "write": true, 00:19:04.794 "unmap": true, 00:19:04.794 "flush": true, 00:19:04.794 "reset": true, 00:19:04.794 "nvme_admin": false, 00:19:04.794 "nvme_io": false, 00:19:04.794 "nvme_io_md": false, 00:19:04.794 "write_zeroes": true, 00:19:04.794 "zcopy": true, 00:19:04.794 "get_zone_info": false, 00:19:04.794 "zone_management": false, 00:19:04.794 "zone_append": false, 00:19:04.794 "compare": false, 00:19:04.794 "compare_and_write": false, 00:19:04.794 "abort": true, 00:19:04.794 "seek_hole": false, 00:19:04.794 "seek_data": false, 00:19:04.794 "copy": true, 00:19:04.794 "nvme_iov_md": false 00:19:04.794 }, 00:19:04.794 "memory_domains": [ 00:19:04.794 { 00:19:04.794 "dma_device_id": "system", 00:19:04.794 "dma_device_type": 1 00:19:04.794 }, 00:19:04.794 { 00:19:04.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.794 "dma_device_type": 2 00:19:04.794 } 00:19:04.794 ], 00:19:04.794 "driver_specific": {} 00:19:04.794 } 00:19:04.794 ] 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:04.794 12:36:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.794 "name": "Existed_Raid", 00:19:04.794 "uuid": "d9f290b3-ce96-4d14-a67f-23fae897f585", 00:19:04.794 "strip_size_kb": 0, 00:19:04.794 "state": "configuring", 00:19:04.794 "raid_level": "raid1", 
00:19:04.794 "superblock": true, 00:19:04.794 "num_base_bdevs": 2, 00:19:04.794 "num_base_bdevs_discovered": 1, 00:19:04.794 "num_base_bdevs_operational": 2, 00:19:04.794 "base_bdevs_list": [ 00:19:04.794 { 00:19:04.794 "name": "BaseBdev1", 00:19:04.794 "uuid": "813109f5-4406-4843-af05-8e08e7bb5dac", 00:19:04.794 "is_configured": true, 00:19:04.794 "data_offset": 256, 00:19:04.794 "data_size": 7936 00:19:04.794 }, 00:19:04.794 { 00:19:04.794 "name": "BaseBdev2", 00:19:04.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.794 "is_configured": false, 00:19:04.794 "data_offset": 0, 00:19:04.794 "data_size": 0 00:19:04.794 } 00:19:04.794 ] 00:19:04.794 }' 00:19:04.794 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.795 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.055 [2024-09-30 12:36:16.919352] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:05.055 [2024-09-30 12:36:16.919432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.055 [2024-09-30 12:36:16.931378] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:05.055 [2024-09-30 12:36:16.933185] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:05.055 [2024-09-30 12:36:16.933272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.055 
12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.055 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.315 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.315 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.315 "name": "Existed_Raid", 00:19:05.315 "uuid": "3a74261f-57e8-4574-9f7b-e37ee08f40bc", 00:19:05.315 "strip_size_kb": 0, 00:19:05.315 "state": "configuring", 00:19:05.315 "raid_level": "raid1", 00:19:05.315 "superblock": true, 00:19:05.315 "num_base_bdevs": 2, 00:19:05.315 "num_base_bdevs_discovered": 1, 00:19:05.315 "num_base_bdevs_operational": 2, 00:19:05.315 "base_bdevs_list": [ 00:19:05.315 { 00:19:05.315 "name": "BaseBdev1", 00:19:05.315 "uuid": "813109f5-4406-4843-af05-8e08e7bb5dac", 00:19:05.315 "is_configured": true, 00:19:05.315 "data_offset": 256, 00:19:05.315 "data_size": 7936 00:19:05.315 }, 00:19:05.315 { 00:19:05.315 "name": "BaseBdev2", 00:19:05.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.315 "is_configured": false, 00:19:05.315 "data_offset": 0, 00:19:05.315 "data_size": 0 00:19:05.315 } 00:19:05.315 ] 00:19:05.315 }' 00:19:05.315 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:19:05.315 12:36:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.575 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:19:05.575 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.575 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.576 [2024-09-30 12:36:17.412141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:05.576 [2024-09-30 12:36:17.412405] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:05.576 [2024-09-30 12:36:17.412442] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:05.576 [2024-09-30 12:36:17.412603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:05.576 [2024-09-30 12:36:17.412715] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:05.576 [2024-09-30 12:36:17.412775] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:05.576 [2024-09-30 12:36:17.412898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.576 BaseBdev2 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.576 [ 00:19:05.576 { 00:19:05.576 "name": "BaseBdev2", 00:19:05.576 "aliases": [ 00:19:05.576 "bdfc5bb0-1c10-47a0-b855-3ab8fc85ae03" 00:19:05.576 ], 00:19:05.576 "product_name": "Malloc disk", 00:19:05.576 "block_size": 4128, 00:19:05.576 "num_blocks": 8192, 00:19:05.576 "uuid": "bdfc5bb0-1c10-47a0-b855-3ab8fc85ae03", 00:19:05.576 "md_size": 32, 00:19:05.576 "md_interleave": true, 00:19:05.576 "dif_type": 0, 00:19:05.576 "assigned_rate_limits": { 00:19:05.576 "rw_ios_per_sec": 0, 00:19:05.576 "rw_mbytes_per_sec": 0, 00:19:05.576 "r_mbytes_per_sec": 0, 00:19:05.576 "w_mbytes_per_sec": 0 00:19:05.576 }, 00:19:05.576 "claimed": true, 00:19:05.576 "claim_type": "exclusive_write", 
00:19:05.576 "zoned": false, 00:19:05.576 "supported_io_types": { 00:19:05.576 "read": true, 00:19:05.576 "write": true, 00:19:05.576 "unmap": true, 00:19:05.576 "flush": true, 00:19:05.576 "reset": true, 00:19:05.576 "nvme_admin": false, 00:19:05.576 "nvme_io": false, 00:19:05.576 "nvme_io_md": false, 00:19:05.576 "write_zeroes": true, 00:19:05.576 "zcopy": true, 00:19:05.576 "get_zone_info": false, 00:19:05.576 "zone_management": false, 00:19:05.576 "zone_append": false, 00:19:05.576 "compare": false, 00:19:05.576 "compare_and_write": false, 00:19:05.576 "abort": true, 00:19:05.576 "seek_hole": false, 00:19:05.576 "seek_data": false, 00:19:05.576 "copy": true, 00:19:05.576 "nvme_iov_md": false 00:19:05.576 }, 00:19:05.576 "memory_domains": [ 00:19:05.576 { 00:19:05.576 "dma_device_id": "system", 00:19:05.576 "dma_device_type": 1 00:19:05.576 }, 00:19:05.576 { 00:19:05.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.576 "dma_device_type": 2 00:19:05.576 } 00:19:05.576 ], 00:19:05.576 "driver_specific": {} 00:19:05.576 } 00:19:05.576 ] 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.576 
12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.576 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.836 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.836 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.836 "name": "Existed_Raid", 00:19:05.836 "uuid": "3a74261f-57e8-4574-9f7b-e37ee08f40bc", 00:19:05.836 "strip_size_kb": 0, 00:19:05.836 "state": "online", 00:19:05.836 "raid_level": "raid1", 00:19:05.836 "superblock": true, 00:19:05.836 "num_base_bdevs": 2, 00:19:05.836 "num_base_bdevs_discovered": 2, 00:19:05.836 
"num_base_bdevs_operational": 2, 00:19:05.836 "base_bdevs_list": [ 00:19:05.836 { 00:19:05.836 "name": "BaseBdev1", 00:19:05.836 "uuid": "813109f5-4406-4843-af05-8e08e7bb5dac", 00:19:05.836 "is_configured": true, 00:19:05.836 "data_offset": 256, 00:19:05.836 "data_size": 7936 00:19:05.836 }, 00:19:05.836 { 00:19:05.836 "name": "BaseBdev2", 00:19:05.836 "uuid": "bdfc5bb0-1c10-47a0-b855-3ab8fc85ae03", 00:19:05.836 "is_configured": true, 00:19:05.836 "data_offset": 256, 00:19:05.836 "data_size": 7936 00:19:05.836 } 00:19:05.836 ] 00:19:05.836 }' 00:19:05.836 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.836 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.096 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:06.096 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:06.096 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:06.096 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:06.096 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:06.096 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:06.096 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:06.096 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:06.097 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.097 12:36:17 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.097 [2024-09-30 12:36:17.907924] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.097 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.097 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:06.097 "name": "Existed_Raid", 00:19:06.097 "aliases": [ 00:19:06.097 "3a74261f-57e8-4574-9f7b-e37ee08f40bc" 00:19:06.097 ], 00:19:06.097 "product_name": "Raid Volume", 00:19:06.097 "block_size": 4128, 00:19:06.097 "num_blocks": 7936, 00:19:06.097 "uuid": "3a74261f-57e8-4574-9f7b-e37ee08f40bc", 00:19:06.097 "md_size": 32, 00:19:06.097 "md_interleave": true, 00:19:06.097 "dif_type": 0, 00:19:06.097 "assigned_rate_limits": { 00:19:06.097 "rw_ios_per_sec": 0, 00:19:06.097 "rw_mbytes_per_sec": 0, 00:19:06.097 "r_mbytes_per_sec": 0, 00:19:06.097 "w_mbytes_per_sec": 0 00:19:06.097 }, 00:19:06.097 "claimed": false, 00:19:06.097 "zoned": false, 00:19:06.097 "supported_io_types": { 00:19:06.097 "read": true, 00:19:06.097 "write": true, 00:19:06.097 "unmap": false, 00:19:06.097 "flush": false, 00:19:06.097 "reset": true, 00:19:06.097 "nvme_admin": false, 00:19:06.097 "nvme_io": false, 00:19:06.097 "nvme_io_md": false, 00:19:06.097 "write_zeroes": true, 00:19:06.097 "zcopy": false, 00:19:06.097 "get_zone_info": false, 00:19:06.097 "zone_management": false, 00:19:06.097 "zone_append": false, 00:19:06.097 "compare": false, 00:19:06.097 "compare_and_write": false, 00:19:06.097 "abort": false, 00:19:06.097 "seek_hole": false, 00:19:06.097 "seek_data": false, 00:19:06.097 "copy": false, 00:19:06.097 "nvme_iov_md": false 00:19:06.097 }, 00:19:06.097 "memory_domains": [ 00:19:06.097 { 00:19:06.097 "dma_device_id": "system", 00:19:06.097 "dma_device_type": 1 00:19:06.097 }, 00:19:06.097 { 00:19:06.097 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:06.097 "dma_device_type": 2 00:19:06.097 }, 00:19:06.097 { 00:19:06.097 "dma_device_id": "system", 00:19:06.097 "dma_device_type": 1 00:19:06.097 }, 00:19:06.097 { 00:19:06.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.097 "dma_device_type": 2 00:19:06.097 } 00:19:06.097 ], 00:19:06.097 "driver_specific": { 00:19:06.097 "raid": { 00:19:06.097 "uuid": "3a74261f-57e8-4574-9f7b-e37ee08f40bc", 00:19:06.097 "strip_size_kb": 0, 00:19:06.097 "state": "online", 00:19:06.097 "raid_level": "raid1", 00:19:06.097 "superblock": true, 00:19:06.097 "num_base_bdevs": 2, 00:19:06.097 "num_base_bdevs_discovered": 2, 00:19:06.097 "num_base_bdevs_operational": 2, 00:19:06.097 "base_bdevs_list": [ 00:19:06.097 { 00:19:06.097 "name": "BaseBdev1", 00:19:06.097 "uuid": "813109f5-4406-4843-af05-8e08e7bb5dac", 00:19:06.097 "is_configured": true, 00:19:06.097 "data_offset": 256, 00:19:06.097 "data_size": 7936 00:19:06.097 }, 00:19:06.097 { 00:19:06.097 "name": "BaseBdev2", 00:19:06.097 "uuid": "bdfc5bb0-1c10-47a0-b855-3ab8fc85ae03", 00:19:06.097 "is_configured": true, 00:19:06.097 "data_offset": 256, 00:19:06.097 "data_size": 7936 00:19:06.097 } 00:19:06.097 ] 00:19:06.097 } 00:19:06.097 } 00:19:06.097 }' 00:19:06.097 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:06.357 12:36:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:06.357 BaseBdev2' 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:06.357 
12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.357 [2024-09-30 12:36:18.159251] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.357 12:36:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.357 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:06.617 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.617 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.617 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.617 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.617 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.617 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.617 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.617 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.617 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.617 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.617 "name": "Existed_Raid", 00:19:06.617 "uuid": "3a74261f-57e8-4574-9f7b-e37ee08f40bc", 00:19:06.617 "strip_size_kb": 0, 00:19:06.618 "state": "online", 00:19:06.618 "raid_level": "raid1", 00:19:06.618 "superblock": true, 00:19:06.618 "num_base_bdevs": 2, 00:19:06.618 "num_base_bdevs_discovered": 1, 00:19:06.618 "num_base_bdevs_operational": 1, 00:19:06.618 "base_bdevs_list": [ 00:19:06.618 { 00:19:06.618 "name": null, 00:19:06.618 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:06.618 "is_configured": false, 00:19:06.618 "data_offset": 0, 00:19:06.618 "data_size": 7936 00:19:06.618 }, 00:19:06.618 { 00:19:06.618 "name": "BaseBdev2", 00:19:06.618 "uuid": "bdfc5bb0-1c10-47a0-b855-3ab8fc85ae03", 00:19:06.618 "is_configured": true, 00:19:06.618 "data_offset": 256, 00:19:06.618 "data_size": 7936 00:19:06.618 } 00:19:06.618 ] 00:19:06.618 }' 00:19:06.618 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.618 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.877 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:06.877 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:06.877 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.877 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.877 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.877 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:06.877 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.877 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:06.877 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:06.877 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:06.877 12:36:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.877 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.877 [2024-09-30 12:36:18.747386] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:06.877 [2024-09-30 12:36:18.747530] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.136 [2024-09-30 12:36:18.837606] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.136 [2024-09-30 12:36:18.837708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:07.136 [2024-09-30 12:36:18.837764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:07.136 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.136 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:07.136 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:07.136 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.137 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:07.137 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.137 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.137 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.137 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:07.137 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:07.137 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:07.137 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88318 00:19:07.137 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88318 ']' 00:19:07.137 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88318 00:19:07.137 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:19:07.137 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:07.137 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88318 00:19:07.137 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:07.137 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:07.137 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88318' 00:19:07.137 killing process with pid 88318 00:19:07.137 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 88318 00:19:07.137 [2024-09-30 12:36:18.924490] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:07.137 12:36:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 88318 00:19:07.137 [2024-09-30 12:36:18.939945] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:08.517 
12:36:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:19:08.517 00:19:08.517 real 0m5.034s 00:19:08.517 user 0m7.157s 00:19:08.517 sys 0m0.858s 00:19:08.518 12:36:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:08.518 12:36:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.518 ************************************ 00:19:08.518 END TEST raid_state_function_test_sb_md_interleaved 00:19:08.518 ************************************ 00:19:08.518 12:36:20 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:19:08.518 12:36:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:08.518 12:36:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:08.518 12:36:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:08.518 ************************************ 00:19:08.518 START TEST raid_superblock_test_md_interleaved 00:19:08.518 ************************************ 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88565 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88565 00:19:08.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88565 ']' 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:08.518 12:36:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.518 [2024-09-30 12:36:20.286250] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:08.518 [2024-09-30 12:36:20.286440] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88565 ] 00:19:08.778 [2024-09-30 12:36:20.449528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.778 [2024-09-30 12:36:20.643967] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.038 [2024-09-30 12:36:20.828324] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.038 [2024-09-30 12:36:20.828375] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.299 malloc1 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.299 [2024-09-30 12:36:21.158321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc1 00:19:09.299 [2024-09-30 12:36:21.158412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.299 [2024-09-30 12:36:21.158452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:09.299 [2024-09-30 12:36:21.158480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.299 [2024-09-30 12:36:21.160243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.299 [2024-09-30 12:36:21.160314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:09.299 pt1 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.299 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.563 malloc2 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.563 [2024-09-30 12:36:21.248785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:09.563 [2024-09-30 12:36:21.248875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.563 [2024-09-30 12:36:21.248914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:09.563 [2024-09-30 12:36:21.248942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.563 [2024-09-30 12:36:21.250676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.563 [2024-09-30 12:36:21.250751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:09.563 pt2 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:09.563 
12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.563 [2024-09-30 12:36:21.260829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:09.563 [2024-09-30 12:36:21.262504] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:09.563 [2024-09-30 12:36:21.262680] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:09.563 [2024-09-30 12:36:21.262695] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:09.563 [2024-09-30 12:36:21.262774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:09.563 [2024-09-30 12:36:21.262836] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:09.563 [2024-09-30 12:36:21.262849] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:09.563 [2024-09-30 12:36:21.262917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.563 "name": "raid_bdev1", 00:19:09.563 "uuid": "3b3b989b-b4a7-4d1f-b4b3-c49eb0356b8f", 00:19:09.563 "strip_size_kb": 0, 00:19:09.563 "state": "online", 00:19:09.563 "raid_level": "raid1", 00:19:09.563 "superblock": true, 00:19:09.563 "num_base_bdevs": 2, 00:19:09.563 "num_base_bdevs_discovered": 2, 00:19:09.563 "num_base_bdevs_operational": 2, 00:19:09.563 "base_bdevs_list": [ 00:19:09.563 { 00:19:09.563 "name": "pt1", 00:19:09.563 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:09.563 "is_configured": true, 00:19:09.563 "data_offset": 256, 00:19:09.563 "data_size": 7936 00:19:09.563 }, 00:19:09.563 { 00:19:09.563 "name": 
"pt2", 00:19:09.563 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:09.563 "is_configured": true, 00:19:09.563 "data_offset": 256, 00:19:09.563 "data_size": 7936 00:19:09.563 } 00:19:09.563 ] 00:19:09.563 }' 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.563 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.190 [2024-09-30 12:36:21.772113] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:19:10.190 "name": "raid_bdev1", 00:19:10.190 "aliases": [ 00:19:10.190 "3b3b989b-b4a7-4d1f-b4b3-c49eb0356b8f" 00:19:10.190 ], 00:19:10.190 "product_name": "Raid Volume", 00:19:10.190 "block_size": 4128, 00:19:10.190 "num_blocks": 7936, 00:19:10.190 "uuid": "3b3b989b-b4a7-4d1f-b4b3-c49eb0356b8f", 00:19:10.190 "md_size": 32, 00:19:10.190 "md_interleave": true, 00:19:10.190 "dif_type": 0, 00:19:10.190 "assigned_rate_limits": { 00:19:10.190 "rw_ios_per_sec": 0, 00:19:10.190 "rw_mbytes_per_sec": 0, 00:19:10.190 "r_mbytes_per_sec": 0, 00:19:10.190 "w_mbytes_per_sec": 0 00:19:10.190 }, 00:19:10.190 "claimed": false, 00:19:10.190 "zoned": false, 00:19:10.190 "supported_io_types": { 00:19:10.190 "read": true, 00:19:10.190 "write": true, 00:19:10.190 "unmap": false, 00:19:10.190 "flush": false, 00:19:10.190 "reset": true, 00:19:10.190 "nvme_admin": false, 00:19:10.190 "nvme_io": false, 00:19:10.190 "nvme_io_md": false, 00:19:10.190 "write_zeroes": true, 00:19:10.190 "zcopy": false, 00:19:10.190 "get_zone_info": false, 00:19:10.190 "zone_management": false, 00:19:10.190 "zone_append": false, 00:19:10.190 "compare": false, 00:19:10.190 "compare_and_write": false, 00:19:10.190 "abort": false, 00:19:10.190 "seek_hole": false, 00:19:10.190 "seek_data": false, 00:19:10.190 "copy": false, 00:19:10.190 "nvme_iov_md": false 00:19:10.190 }, 00:19:10.190 "memory_domains": [ 00:19:10.190 { 00:19:10.190 "dma_device_id": "system", 00:19:10.190 "dma_device_type": 1 00:19:10.190 }, 00:19:10.190 { 00:19:10.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.190 "dma_device_type": 2 00:19:10.190 }, 00:19:10.190 { 00:19:10.190 "dma_device_id": "system", 00:19:10.190 "dma_device_type": 1 00:19:10.190 }, 00:19:10.190 { 00:19:10.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.190 "dma_device_type": 2 00:19:10.190 } 00:19:10.190 ], 00:19:10.190 "driver_specific": { 00:19:10.190 "raid": { 00:19:10.190 "uuid": "3b3b989b-b4a7-4d1f-b4b3-c49eb0356b8f", 00:19:10.190 
"strip_size_kb": 0, 00:19:10.190 "state": "online", 00:19:10.190 "raid_level": "raid1", 00:19:10.190 "superblock": true, 00:19:10.190 "num_base_bdevs": 2, 00:19:10.190 "num_base_bdevs_discovered": 2, 00:19:10.190 "num_base_bdevs_operational": 2, 00:19:10.190 "base_bdevs_list": [ 00:19:10.190 { 00:19:10.190 "name": "pt1", 00:19:10.190 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:10.190 "is_configured": true, 00:19:10.190 "data_offset": 256, 00:19:10.190 "data_size": 7936 00:19:10.190 }, 00:19:10.190 { 00:19:10.190 "name": "pt2", 00:19:10.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:10.190 "is_configured": true, 00:19:10.190 "data_offset": 256, 00:19:10.190 "data_size": 7936 00:19:10.190 } 00:19:10.190 ] 00:19:10.190 } 00:19:10.190 } 00:19:10.190 }' 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:10.190 pt2' 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.190 12:36:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 
-- # set +x 00:19:10.190 [2024-09-30 12:36:21.995912] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:10.190 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.190 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3b3b989b-b4a7-4d1f-b4b3-c49eb0356b8f 00:19:10.190 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 3b3b989b-b4a7-4d1f-b4b3-c49eb0356b8f ']' 00:19:10.190 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:10.190 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.190 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.190 [2024-09-30 12:36:22.043567] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:10.190 [2024-09-30 12:36:22.043588] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:10.190 [2024-09-30 12:36:22.043660] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:10.190 [2024-09-30 12:36:22.043709] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:10.191 [2024-09-30 12:36:22.043720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:10.191 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.191 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.191 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.191 12:36:22 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.191 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:10.191 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.461 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:10.461 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:10.461 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:10.461 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.462 [2024-09-30 12:36:22.191340] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:10.462 [2024-09-30 12:36:22.193017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:10.462 [2024-09-30 12:36:22.193079] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:10.462 [2024-09-30 12:36:22.193125] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:10.462 [2024-09-30 12:36:22.193139] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:10.462 [2024-09-30 12:36:22.193147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:10.462 request: 00:19:10.462 { 00:19:10.462 "name": "raid_bdev1", 00:19:10.462 "raid_level": "raid1", 00:19:10.462 "base_bdevs": [ 00:19:10.462 "malloc1", 00:19:10.462 "malloc2" 00:19:10.462 ], 00:19:10.462 "superblock": false, 00:19:10.462 "method": "bdev_raid_create", 00:19:10.462 "req_id": 1 00:19:10.462 } 00:19:10.462 Got JSON-RPC error response 00:19:10.462 response: 00:19:10.462 { 00:19:10.462 "code": -17, 00:19:10.462 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:10.462 } 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.462 [2024-09-30 12:36:22.259188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:10.462 [2024-09-30 12:36:22.259235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.462 [2024-09-30 12:36:22.259247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:10.462 [2024-09-30 12:36:22.259256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.462 [2024-09-30 12:36:22.261002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.462 [2024-09-30 12:36:22.261088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:10.462 [2024-09-30 12:36:22.261131] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt1 00:19:10.462 [2024-09-30 12:36:22.261191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:10.462 pt1 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.462 "name": "raid_bdev1", 00:19:10.462 "uuid": "3b3b989b-b4a7-4d1f-b4b3-c49eb0356b8f", 00:19:10.462 "strip_size_kb": 0, 00:19:10.462 "state": "configuring", 00:19:10.462 "raid_level": "raid1", 00:19:10.462 "superblock": true, 00:19:10.462 "num_base_bdevs": 2, 00:19:10.462 "num_base_bdevs_discovered": 1, 00:19:10.462 "num_base_bdevs_operational": 2, 00:19:10.462 "base_bdevs_list": [ 00:19:10.462 { 00:19:10.462 "name": "pt1", 00:19:10.462 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:10.462 "is_configured": true, 00:19:10.462 "data_offset": 256, 00:19:10.462 "data_size": 7936 00:19:10.462 }, 00:19:10.462 { 00:19:10.462 "name": null, 00:19:10.462 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:10.462 "is_configured": false, 00:19:10.462 "data_offset": 256, 00:19:10.462 "data_size": 7936 00:19:10.462 } 00:19:10.462 ] 00:19:10.462 }' 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.462 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.031 [2024-09-30 12:36:22.726406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:11.031 [2024-09-30 12:36:22.726495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.031 [2024-09-30 12:36:22.726528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:11.031 [2024-09-30 12:36:22.726558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.031 [2024-09-30 12:36:22.726674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.031 [2024-09-30 12:36:22.726765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:11.031 [2024-09-30 12:36:22.726834] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:11.031 [2024-09-30 12:36:22.726887] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:11.031 [2024-09-30 12:36:22.726993] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:11.031 [2024-09-30 12:36:22.727030] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:11.031 [2024-09-30 12:36:22.727108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:11.031 [2024-09-30 12:36:22.727192] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:11.031 [2024-09-30 12:36:22.727224] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:11.031 [2024-09-30 12:36:22.727306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.031 pt2 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set 
+x 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.031 "name": "raid_bdev1", 00:19:11.031 "uuid": "3b3b989b-b4a7-4d1f-b4b3-c49eb0356b8f", 00:19:11.031 "strip_size_kb": 0, 00:19:11.031 "state": "online", 00:19:11.031 "raid_level": "raid1", 00:19:11.031 "superblock": true, 00:19:11.031 "num_base_bdevs": 2, 00:19:11.031 "num_base_bdevs_discovered": 2, 00:19:11.031 "num_base_bdevs_operational": 2, 00:19:11.031 "base_bdevs_list": [ 00:19:11.031 { 00:19:11.031 "name": "pt1", 00:19:11.031 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:11.031 "is_configured": true, 00:19:11.031 "data_offset": 256, 00:19:11.031 "data_size": 7936 00:19:11.031 }, 00:19:11.031 { 00:19:11.031 "name": "pt2", 00:19:11.031 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:11.031 "is_configured": true, 00:19:11.031 "data_offset": 256, 00:19:11.031 "data_size": 7936 00:19:11.031 } 00:19:11.031 ] 00:19:11.031 }' 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.031 12:36:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.291 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:11.291 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:11.291 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:11.291 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:11.291 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:11.291 12:36:23 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:11.291 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:11.291 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:11.291 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.291 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.291 [2024-09-30 12:36:23.173878] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:11.552 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.552 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:11.552 "name": "raid_bdev1", 00:19:11.552 "aliases": [ 00:19:11.552 "3b3b989b-b4a7-4d1f-b4b3-c49eb0356b8f" 00:19:11.552 ], 00:19:11.552 "product_name": "Raid Volume", 00:19:11.552 "block_size": 4128, 00:19:11.552 "num_blocks": 7936, 00:19:11.552 "uuid": "3b3b989b-b4a7-4d1f-b4b3-c49eb0356b8f", 00:19:11.552 "md_size": 32, 00:19:11.552 "md_interleave": true, 00:19:11.552 "dif_type": 0, 00:19:11.552 "assigned_rate_limits": { 00:19:11.552 "rw_ios_per_sec": 0, 00:19:11.552 "rw_mbytes_per_sec": 0, 00:19:11.552 "r_mbytes_per_sec": 0, 00:19:11.552 "w_mbytes_per_sec": 0 00:19:11.552 }, 00:19:11.552 "claimed": false, 00:19:11.552 "zoned": false, 00:19:11.552 "supported_io_types": { 00:19:11.552 "read": true, 00:19:11.552 "write": true, 00:19:11.552 "unmap": false, 00:19:11.552 "flush": false, 00:19:11.552 "reset": true, 00:19:11.552 "nvme_admin": false, 00:19:11.552 "nvme_io": false, 00:19:11.552 "nvme_io_md": false, 00:19:11.552 "write_zeroes": true, 00:19:11.552 "zcopy": false, 00:19:11.552 "get_zone_info": false, 00:19:11.552 "zone_management": 
false, 00:19:11.552 "zone_append": false, 00:19:11.552 "compare": false, 00:19:11.552 "compare_and_write": false, 00:19:11.552 "abort": false, 00:19:11.552 "seek_hole": false, 00:19:11.552 "seek_data": false, 00:19:11.552 "copy": false, 00:19:11.552 "nvme_iov_md": false 00:19:11.552 }, 00:19:11.552 "memory_domains": [ 00:19:11.552 { 00:19:11.552 "dma_device_id": "system", 00:19:11.552 "dma_device_type": 1 00:19:11.552 }, 00:19:11.552 { 00:19:11.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.552 "dma_device_type": 2 00:19:11.552 }, 00:19:11.552 { 00:19:11.552 "dma_device_id": "system", 00:19:11.552 "dma_device_type": 1 00:19:11.552 }, 00:19:11.552 { 00:19:11.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.552 "dma_device_type": 2 00:19:11.552 } 00:19:11.552 ], 00:19:11.552 "driver_specific": { 00:19:11.552 "raid": { 00:19:11.552 "uuid": "3b3b989b-b4a7-4d1f-b4b3-c49eb0356b8f", 00:19:11.552 "strip_size_kb": 0, 00:19:11.552 "state": "online", 00:19:11.552 "raid_level": "raid1", 00:19:11.552 "superblock": true, 00:19:11.552 "num_base_bdevs": 2, 00:19:11.552 "num_base_bdevs_discovered": 2, 00:19:11.552 "num_base_bdevs_operational": 2, 00:19:11.552 "base_bdevs_list": [ 00:19:11.552 { 00:19:11.552 "name": "pt1", 00:19:11.553 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:11.553 "is_configured": true, 00:19:11.553 "data_offset": 256, 00:19:11.553 "data_size": 7936 00:19:11.553 }, 00:19:11.553 { 00:19:11.553 "name": "pt2", 00:19:11.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:11.553 "is_configured": true, 00:19:11.553 "data_offset": 256, 00:19:11.553 "data_size": 7936 00:19:11.553 } 00:19:11.553 ] 00:19:11.553 } 00:19:11.553 } 00:19:11.553 }' 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 
00:19:11.553 pt2' 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.553 [2024-09-30 12:36:23.405446] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 3b3b989b-b4a7-4d1f-b4b3-c49eb0356b8f '!=' 3b3b989b-b4a7-4d1f-b4b3-c49eb0356b8f ']' 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.553 12:36:23 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.553 [2024-09-30 12:36:23.437218] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.553 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.813 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.813 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.813 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.813 12:36:23 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.813 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.813 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.813 "name": "raid_bdev1", 00:19:11.813 "uuid": "3b3b989b-b4a7-4d1f-b4b3-c49eb0356b8f", 00:19:11.813 "strip_size_kb": 0, 00:19:11.813 "state": "online", 00:19:11.813 "raid_level": "raid1", 00:19:11.813 "superblock": true, 00:19:11.813 "num_base_bdevs": 2, 00:19:11.813 "num_base_bdevs_discovered": 1, 00:19:11.813 "num_base_bdevs_operational": 1, 00:19:11.813 "base_bdevs_list": [ 00:19:11.813 { 00:19:11.813 "name": null, 00:19:11.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.813 "is_configured": false, 00:19:11.813 "data_offset": 0, 00:19:11.813 "data_size": 7936 00:19:11.813 }, 00:19:11.813 { 00:19:11.813 "name": "pt2", 00:19:11.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:11.813 "is_configured": true, 00:19:11.813 "data_offset": 256, 00:19:11.813 "data_size": 7936 00:19:11.813 } 00:19:11.813 ] 00:19:11.813 }' 00:19:11.813 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.813 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.073 [2024-09-30 12:36:23.864454] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:12.073 [2024-09-30 12:36:23.864514] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:19:12.073 [2024-09-30 12:36:23.864581] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.073 [2024-09-30 12:36:23.864630] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:12.073 [2024-09-30 12:36:23.864664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.073 
12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.073 [2024-09-30 12:36:23.920360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:12.073 [2024-09-30 12:36:23.920417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.073 [2024-09-30 12:36:23.920430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:12.073 [2024-09-30 12:36:23.920440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.073 [2024-09-30 12:36:23.922245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.073 [2024-09-30 12:36:23.922287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:12.073 [2024-09-30 12:36:23.922327] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:12.073 [2024-09-30 12:36:23.922373] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:12.073 [2024-09-30 12:36:23.922422] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:12.073 [2024-09-30 12:36:23.922434] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:12.073 [2024-09-30 12:36:23.922513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:12.073 [2024-09-30 12:36:23.922571] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:12.073 [2024-09-30 12:36:23.922578] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:12.073 [2024-09-30 12:36:23.922627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.073 pt2 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.073 "name": "raid_bdev1", 00:19:12.073 "uuid": "3b3b989b-b4a7-4d1f-b4b3-c49eb0356b8f", 00:19:12.073 "strip_size_kb": 0, 00:19:12.073 "state": "online", 00:19:12.073 "raid_level": "raid1", 00:19:12.073 "superblock": true, 00:19:12.073 "num_base_bdevs": 2, 00:19:12.073 "num_base_bdevs_discovered": 1, 00:19:12.073 "num_base_bdevs_operational": 1, 00:19:12.073 "base_bdevs_list": [ 00:19:12.073 { 00:19:12.073 "name": null, 00:19:12.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.073 "is_configured": false, 00:19:12.073 "data_offset": 256, 00:19:12.073 "data_size": 7936 00:19:12.073 }, 00:19:12.073 { 00:19:12.073 "name": "pt2", 00:19:12.073 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:12.073 "is_configured": true, 00:19:12.073 "data_offset": 256, 00:19:12.073 "data_size": 7936 00:19:12.073 } 00:19:12.073 ] 00:19:12.073 }' 00:19:12.073 12:36:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.073 12:36:23 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.640 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:12.640 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.640 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.640 [2024-09-30 12:36:24.391546] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:12.640 [2024-09-30 12:36:24.391617] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:12.640 [2024-09-30 12:36:24.391695] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.641 [2024-09-30 12:36:24.391745] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:12.641 [2024-09-30 12:36:24.391796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:12.641 12:36:24 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.641 [2024-09-30 12:36:24.455461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:12.641 [2024-09-30 12:36:24.455541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.641 [2024-09-30 12:36:24.455571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:12.641 [2024-09-30 12:36:24.455597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.641 [2024-09-30 12:36:24.457409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.641 [2024-09-30 12:36:24.457478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:12.641 [2024-09-30 12:36:24.457537] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:12.641 [2024-09-30 12:36:24.457595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:12.641 [2024-09-30 12:36:24.457682] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:12.641 [2024-09-30 12:36:24.457775] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:12.641 [2024-09-30 12:36:24.457834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:19:12.641 [2024-09-30 12:36:24.457924] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:12.641 [2024-09-30 12:36:24.458018] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:12.641 [2024-09-30 12:36:24.458055] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:12.641 [2024-09-30 12:36:24.458113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:12.641 [2024-09-30 12:36:24.458170] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:12.641 [2024-09-30 12:36:24.458180] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:12.641 [2024-09-30 12:36:24.458240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.641 pt1 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:12.641 12:36:24 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.641 "name": "raid_bdev1", 00:19:12.641 "uuid": "3b3b989b-b4a7-4d1f-b4b3-c49eb0356b8f", 00:19:12.641 "strip_size_kb": 0, 00:19:12.641 "state": "online", 00:19:12.641 "raid_level": "raid1", 00:19:12.641 "superblock": true, 00:19:12.641 "num_base_bdevs": 2, 00:19:12.641 "num_base_bdevs_discovered": 1, 00:19:12.641 "num_base_bdevs_operational": 1, 00:19:12.641 "base_bdevs_list": [ 00:19:12.641 { 00:19:12.641 "name": null, 00:19:12.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.641 "is_configured": false, 00:19:12.641 "data_offset": 256, 00:19:12.641 "data_size": 7936 00:19:12.641 }, 00:19:12.641 { 00:19:12.641 "name": "pt2", 00:19:12.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:12.641 "is_configured": true, 00:19:12.641 "data_offset": 256, 00:19:12.641 
"data_size": 7936 00:19:12.641 } 00:19:12.641 ] 00:19:12.641 }' 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.641 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.209 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:13.209 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:13.209 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.209 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.209 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.209 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:13.209 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:13.209 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.209 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.209 12:36:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:13.209 [2024-09-30 12:36:24.990721] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:13.209 12:36:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.209 12:36:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 3b3b989b-b4a7-4d1f-b4b3-c49eb0356b8f '!=' 3b3b989b-b4a7-4d1f-b4b3-c49eb0356b8f ']' 00:19:13.209 12:36:25 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88565 00:19:13.209 12:36:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88565 ']' 00:19:13.209 12:36:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88565 00:19:13.209 12:36:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:19:13.209 12:36:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:13.209 12:36:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88565 00:19:13.209 12:36:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:13.209 12:36:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:13.209 12:36:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88565' 00:19:13.209 killing process with pid 88565 00:19:13.209 12:36:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 88565 00:19:13.209 [2024-09-30 12:36:25.075546] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:13.209 [2024-09-30 12:36:25.075619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:13.209 [2024-09-30 12:36:25.075651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:13.209 [2024-09-30 12:36:25.075667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:13.209 12:36:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 88565 00:19:13.468 [2024-09-30 12:36:25.272392] 
bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:14.848 12:36:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:19:14.848 00:19:14.848 real 0m6.266s 00:19:14.848 user 0m9.416s 00:19:14.848 sys 0m1.148s 00:19:14.848 ************************************ 00:19:14.848 END TEST raid_superblock_test_md_interleaved 00:19:14.848 ************************************ 00:19:14.848 12:36:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:14.848 12:36:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.848 12:36:26 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:19:14.848 12:36:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:14.848 12:36:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:14.848 12:36:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:14.848 ************************************ 00:19:14.848 START TEST raid_rebuild_test_sb_md_interleaved 00:19:14.848 ************************************ 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:19:14.848 12:36:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:14.848 
12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:14.848 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=88894 00:19:14.849 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:14.849 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88894 00:19:14.849 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88894 ']' 00:19:14.849 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.849 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:14.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.849 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.849 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:14.849 12:36:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.849 [2024-09-30 12:36:26.652112] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:19:14.849 [2024-09-30 12:36:26.652247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88894 ] 00:19:14.849 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:14.849 Zero copy mechanism will not be used. 00:19:15.108 [2024-09-30 12:36:26.822174] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.368 [2024-09-30 12:36:27.007562] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.368 [2024-09-30 12:36:27.195234] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:15.368 [2024-09-30 12:36:27.195294] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:15.628 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:15.628 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:19:15.628 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:15.628 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:19:15.628 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.628 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.628 BaseBdev1_malloc 00:19:15.628 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.628 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:15.628 12:36:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.628 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.628 [2024-09-30 12:36:27.502570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:15.628 [2024-09-30 12:36:27.502630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.628 [2024-09-30 12:36:27.502649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:15.628 [2024-09-30 12:36:27.502659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.628 [2024-09-30 12:36:27.504435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.628 [2024-09-30 12:36:27.504475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:15.628 BaseBdev1 00:19:15.628 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.628 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:15.628 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:19:15.628 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.628 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.888 BaseBdev2_malloc 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.888 [2024-09-30 12:36:27.562710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:15.888 [2024-09-30 12:36:27.562781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.888 [2024-09-30 12:36:27.562798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:15.888 [2024-09-30 12:36:27.562808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.888 [2024-09-30 12:36:27.564515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.888 [2024-09-30 12:36:27.564554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:15.888 BaseBdev2 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.888 spare_malloc 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:19:15.888 spare_delay 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.888 [2024-09-30 12:36:27.630230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:15.888 [2024-09-30 12:36:27.630284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.888 [2024-09-30 12:36:27.630303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:15.888 [2024-09-30 12:36:27.630314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.888 [2024-09-30 12:36:27.632061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.888 [2024-09-30 12:36:27.632101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:15.888 spare 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.888 [2024-09-30 12:36:27.642267] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:15.888 [2024-09-30 12:36:27.643972] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:15.888 [2024-09-30 12:36:27.644156] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:15.888 [2024-09-30 12:36:27.644179] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:15.888 [2024-09-30 12:36:27.644244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:15.888 [2024-09-30 12:36:27.644324] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:15.888 [2024-09-30 12:36:27.644333] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:15.888 [2024-09-30 12:36:27.644397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.888 "name": "raid_bdev1", 00:19:15.888 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:15.888 "strip_size_kb": 0, 00:19:15.888 "state": "online", 00:19:15.888 "raid_level": "raid1", 00:19:15.888 "superblock": true, 00:19:15.888 "num_base_bdevs": 2, 00:19:15.888 "num_base_bdevs_discovered": 2, 00:19:15.888 "num_base_bdevs_operational": 2, 00:19:15.888 "base_bdevs_list": [ 00:19:15.888 { 00:19:15.888 "name": "BaseBdev1", 00:19:15.888 "uuid": "914990be-a23f-532a-b620-35ee978071b7", 00:19:15.888 "is_configured": true, 00:19:15.888 "data_offset": 256, 00:19:15.888 "data_size": 7936 00:19:15.888 }, 00:19:15.888 { 00:19:15.888 "name": "BaseBdev2", 00:19:15.888 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:15.888 "is_configured": true, 00:19:15.888 "data_offset": 256, 00:19:15.888 "data_size": 7936 00:19:15.888 } 00:19:15.888 ] 00:19:15.888 }' 00:19:15.888 12:36:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.888 12:36:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.458 [2024-09-30 12:36:28.105656] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.458 [2024-09-30 12:36:28.177293] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.458 "name": "raid_bdev1", 00:19:16.458 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:16.458 "strip_size_kb": 0, 00:19:16.458 "state": "online", 00:19:16.458 "raid_level": "raid1", 00:19:16.458 "superblock": true, 00:19:16.458 "num_base_bdevs": 2, 00:19:16.458 "num_base_bdevs_discovered": 1, 00:19:16.458 "num_base_bdevs_operational": 1, 00:19:16.458 "base_bdevs_list": [ 00:19:16.458 { 00:19:16.458 "name": null, 00:19:16.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.458 "is_configured": false, 00:19:16.458 "data_offset": 0, 00:19:16.458 "data_size": 7936 00:19:16.458 }, 00:19:16.458 { 00:19:16.458 "name": "BaseBdev2", 00:19:16.458 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:16.458 "is_configured": true, 00:19:16.458 "data_offset": 256, 00:19:16.458 "data_size": 7936 00:19:16.458 } 00:19:16.458 ] 00:19:16.458 }' 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.458 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.717 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:16.717 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.717 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:19:16.717 [2024-09-30 12:36:28.572636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:16.717 [2024-09-30 12:36:28.589351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:16.717 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.717 12:36:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:16.717 [2024-09-30 12:36:28.591121] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:18.098 "name": "raid_bdev1", 00:19:18.098 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:18.098 "strip_size_kb": 0, 00:19:18.098 "state": "online", 00:19:18.098 "raid_level": "raid1", 00:19:18.098 "superblock": true, 00:19:18.098 "num_base_bdevs": 2, 00:19:18.098 "num_base_bdevs_discovered": 2, 00:19:18.098 "num_base_bdevs_operational": 2, 00:19:18.098 "process": { 00:19:18.098 "type": "rebuild", 00:19:18.098 "target": "spare", 00:19:18.098 "progress": { 00:19:18.098 "blocks": 2560, 00:19:18.098 "percent": 32 00:19:18.098 } 00:19:18.098 }, 00:19:18.098 "base_bdevs_list": [ 00:19:18.098 { 00:19:18.098 "name": "spare", 00:19:18.098 "uuid": "3576e3ad-6390-5bd6-aa4b-76151689c25b", 00:19:18.098 "is_configured": true, 00:19:18.098 "data_offset": 256, 00:19:18.098 "data_size": 7936 00:19:18.098 }, 00:19:18.098 { 00:19:18.098 "name": "BaseBdev2", 00:19:18.098 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:18.098 "is_configured": true, 00:19:18.098 "data_offset": 256, 00:19:18.098 "data_size": 7936 00:19:18.098 } 00:19:18.098 ] 00:19:18.098 }' 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.098 [2024-09-30 
12:36:29.726900] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:18.098 [2024-09-30 12:36:29.795835] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:18.098 [2024-09-30 12:36:29.795896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.098 [2024-09-30 12:36:29.795911] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:18.098 [2024-09-30 12:36:29.795920] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.098 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:18.099 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.099 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.099 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.099 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.099 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:18.099 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.099 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.099 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.099 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.099 12:36:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.099 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.099 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.099 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.099 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.099 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.099 "name": "raid_bdev1", 00:19:18.099 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:18.099 "strip_size_kb": 0, 00:19:18.099 "state": "online", 00:19:18.099 "raid_level": "raid1", 00:19:18.099 "superblock": true, 00:19:18.099 "num_base_bdevs": 2, 00:19:18.099 "num_base_bdevs_discovered": 1, 00:19:18.099 "num_base_bdevs_operational": 1, 00:19:18.099 "base_bdevs_list": [ 00:19:18.099 { 00:19:18.099 "name": null, 00:19:18.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.099 "is_configured": false, 00:19:18.099 "data_offset": 0, 00:19:18.099 "data_size": 7936 00:19:18.099 }, 00:19:18.099 { 00:19:18.099 "name": "BaseBdev2", 00:19:18.099 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:18.099 "is_configured": true, 00:19:18.099 "data_offset": 256, 00:19:18.099 "data_size": 7936 00:19:18.099 } 00:19:18.099 ] 00:19:18.099 }' 00:19:18.099 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.099 12:36:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.669 12:36:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.669 12:36:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.669 12:36:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:18.669 12:36:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:18.669 12:36:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.669 12:36:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.669 12:36:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.669 12:36:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.669 12:36:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.669 12:36:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.669 12:36:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.669 "name": "raid_bdev1", 00:19:18.669 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:18.669 "strip_size_kb": 0, 00:19:18.669 "state": "online", 00:19:18.669 "raid_level": "raid1", 00:19:18.669 "superblock": true, 00:19:18.669 "num_base_bdevs": 2, 00:19:18.669 "num_base_bdevs_discovered": 1, 00:19:18.669 "num_base_bdevs_operational": 1, 00:19:18.669 "base_bdevs_list": [ 00:19:18.669 { 00:19:18.669 "name": null, 00:19:18.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.669 "is_configured": false, 00:19:18.669 "data_offset": 0, 00:19:18.669 "data_size": 7936 00:19:18.669 }, 00:19:18.669 { 00:19:18.669 "name": "BaseBdev2", 00:19:18.669 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:18.669 "is_configured": true, 00:19:18.669 "data_offset": 256, 
00:19:18.669 "data_size": 7936 00:19:18.669 } 00:19:18.669 ] 00:19:18.669 }' 00:19:18.669 12:36:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.669 12:36:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:18.669 12:36:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.669 12:36:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:18.669 12:36:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:18.669 12:36:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.669 12:36:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.669 [2024-09-30 12:36:30.426064] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.669 [2024-09-30 12:36:30.440956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:18.669 12:36:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.669 12:36:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:18.669 [2024-09-30 12:36:30.442583] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:19.609 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.609 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.609 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:19.609 12:36:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:19.609 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.609 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.609 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.609 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.609 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.609 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.609 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.609 "name": "raid_bdev1", 00:19:19.609 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:19.609 "strip_size_kb": 0, 00:19:19.609 "state": "online", 00:19:19.609 "raid_level": "raid1", 00:19:19.609 "superblock": true, 00:19:19.609 "num_base_bdevs": 2, 00:19:19.609 "num_base_bdevs_discovered": 2, 00:19:19.609 "num_base_bdevs_operational": 2, 00:19:19.609 "process": { 00:19:19.609 "type": "rebuild", 00:19:19.609 "target": "spare", 00:19:19.609 "progress": { 00:19:19.609 "blocks": 2560, 00:19:19.609 "percent": 32 00:19:19.609 } 00:19:19.609 }, 00:19:19.609 "base_bdevs_list": [ 00:19:19.609 { 00:19:19.609 "name": "spare", 00:19:19.609 "uuid": "3576e3ad-6390-5bd6-aa4b-76151689c25b", 00:19:19.609 "is_configured": true, 00:19:19.609 "data_offset": 256, 00:19:19.609 "data_size": 7936 00:19:19.609 }, 00:19:19.609 { 00:19:19.609 "name": "BaseBdev2", 00:19:19.609 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:19.609 "is_configured": true, 00:19:19.609 "data_offset": 256, 00:19:19.609 "data_size": 7936 00:19:19.609 } 
00:19:19.609 ] 00:19:19.609 }' 00:19:19.609 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.869 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:19.869 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.869 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.869 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:19.869 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:19.869 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:19.869 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:19.869 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:19.869 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:19.869 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=736 00:19:19.869 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:19.869 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.869 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.870 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:19.870 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:19:19.870 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.870 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.870 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.870 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.870 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.870 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.870 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.870 "name": "raid_bdev1", 00:19:19.870 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:19.870 "strip_size_kb": 0, 00:19:19.870 "state": "online", 00:19:19.870 "raid_level": "raid1", 00:19:19.870 "superblock": true, 00:19:19.870 "num_base_bdevs": 2, 00:19:19.870 "num_base_bdevs_discovered": 2, 00:19:19.870 "num_base_bdevs_operational": 2, 00:19:19.870 "process": { 00:19:19.870 "type": "rebuild", 00:19:19.870 "target": "spare", 00:19:19.870 "progress": { 00:19:19.870 "blocks": 2816, 00:19:19.870 "percent": 35 00:19:19.870 } 00:19:19.870 }, 00:19:19.870 "base_bdevs_list": [ 00:19:19.870 { 00:19:19.870 "name": "spare", 00:19:19.870 "uuid": "3576e3ad-6390-5bd6-aa4b-76151689c25b", 00:19:19.870 "is_configured": true, 00:19:19.870 "data_offset": 256, 00:19:19.870 "data_size": 7936 00:19:19.870 }, 00:19:19.870 { 00:19:19.870 "name": "BaseBdev2", 00:19:19.870 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:19.870 "is_configured": true, 00:19:19.870 "data_offset": 256, 00:19:19.870 "data_size": 7936 00:19:19.870 } 00:19:19.870 ] 00:19:19.870 }' 00:19:19.870 12:36:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.870 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:19.870 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.870 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.870 12:36:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:21.252 12:36:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:21.252 12:36:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:21.252 12:36:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.253 12:36:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:21.253 12:36:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:21.253 12:36:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.253 12:36:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.253 12:36:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.253 12:36:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.253 12:36:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.253 12:36:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.253 
12:36:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.253 "name": "raid_bdev1", 00:19:21.253 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:21.253 "strip_size_kb": 0, 00:19:21.253 "state": "online", 00:19:21.253 "raid_level": "raid1", 00:19:21.253 "superblock": true, 00:19:21.253 "num_base_bdevs": 2, 00:19:21.253 "num_base_bdevs_discovered": 2, 00:19:21.253 "num_base_bdevs_operational": 2, 00:19:21.253 "process": { 00:19:21.253 "type": "rebuild", 00:19:21.253 "target": "spare", 00:19:21.253 "progress": { 00:19:21.253 "blocks": 5632, 00:19:21.253 "percent": 70 00:19:21.253 } 00:19:21.253 }, 00:19:21.253 "base_bdevs_list": [ 00:19:21.253 { 00:19:21.253 "name": "spare", 00:19:21.253 "uuid": "3576e3ad-6390-5bd6-aa4b-76151689c25b", 00:19:21.253 "is_configured": true, 00:19:21.253 "data_offset": 256, 00:19:21.253 "data_size": 7936 00:19:21.253 }, 00:19:21.253 { 00:19:21.253 "name": "BaseBdev2", 00:19:21.253 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:21.253 "is_configured": true, 00:19:21.253 "data_offset": 256, 00:19:21.253 "data_size": 7936 00:19:21.253 } 00:19:21.253 ] 00:19:21.253 }' 00:19:21.253 12:36:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.253 12:36:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:21.253 12:36:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.253 12:36:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:21.253 12:36:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:21.824 [2024-09-30 12:36:33.554130] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:21.824 [2024-09-30 12:36:33.554197] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:21.824 [2024-09-30 12:36:33.554292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.084 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:22.084 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:22.084 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.085 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:22.085 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:22.085 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.085 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.085 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.085 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.085 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.085 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.085 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.085 "name": "raid_bdev1", 00:19:22.085 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:22.085 "strip_size_kb": 0, 00:19:22.085 "state": "online", 00:19:22.085 "raid_level": "raid1", 00:19:22.085 "superblock": true, 00:19:22.085 "num_base_bdevs": 2, 00:19:22.085 
"num_base_bdevs_discovered": 2, 00:19:22.085 "num_base_bdevs_operational": 2, 00:19:22.085 "base_bdevs_list": [ 00:19:22.085 { 00:19:22.085 "name": "spare", 00:19:22.085 "uuid": "3576e3ad-6390-5bd6-aa4b-76151689c25b", 00:19:22.085 "is_configured": true, 00:19:22.085 "data_offset": 256, 00:19:22.085 "data_size": 7936 00:19:22.085 }, 00:19:22.085 { 00:19:22.085 "name": "BaseBdev2", 00:19:22.085 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:22.085 "is_configured": true, 00:19:22.085 "data_offset": 256, 00:19:22.085 "data_size": 7936 00:19:22.085 } 00:19:22.085 ] 00:19:22.085 }' 00:19:22.085 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.085 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:22.085 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.346 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:22.346 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:19:22.346 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:22.346 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.346 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:22.346 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:22.346 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.346 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.346 12:36:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.346 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.346 12:36:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.346 "name": "raid_bdev1", 00:19:22.346 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:22.346 "strip_size_kb": 0, 00:19:22.346 "state": "online", 00:19:22.346 "raid_level": "raid1", 00:19:22.346 "superblock": true, 00:19:22.346 "num_base_bdevs": 2, 00:19:22.346 "num_base_bdevs_discovered": 2, 00:19:22.346 "num_base_bdevs_operational": 2, 00:19:22.346 "base_bdevs_list": [ 00:19:22.346 { 00:19:22.346 "name": "spare", 00:19:22.346 "uuid": "3576e3ad-6390-5bd6-aa4b-76151689c25b", 00:19:22.346 "is_configured": true, 00:19:22.346 "data_offset": 256, 00:19:22.346 "data_size": 7936 00:19:22.346 }, 00:19:22.346 { 00:19:22.346 "name": "BaseBdev2", 00:19:22.346 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:22.346 "is_configured": true, 00:19:22.346 "data_offset": 256, 00:19:22.346 "data_size": 7936 00:19:22.346 } 00:19:22.346 ] 00:19:22.346 }' 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:22.346 12:36:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.346 "name": 
"raid_bdev1", 00:19:22.346 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:22.346 "strip_size_kb": 0, 00:19:22.346 "state": "online", 00:19:22.346 "raid_level": "raid1", 00:19:22.346 "superblock": true, 00:19:22.346 "num_base_bdevs": 2, 00:19:22.346 "num_base_bdevs_discovered": 2, 00:19:22.346 "num_base_bdevs_operational": 2, 00:19:22.346 "base_bdevs_list": [ 00:19:22.346 { 00:19:22.346 "name": "spare", 00:19:22.346 "uuid": "3576e3ad-6390-5bd6-aa4b-76151689c25b", 00:19:22.346 "is_configured": true, 00:19:22.346 "data_offset": 256, 00:19:22.346 "data_size": 7936 00:19:22.346 }, 00:19:22.346 { 00:19:22.346 "name": "BaseBdev2", 00:19:22.346 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:22.346 "is_configured": true, 00:19:22.346 "data_offset": 256, 00:19:22.346 "data_size": 7936 00:19:22.346 } 00:19:22.346 ] 00:19:22.346 }' 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.346 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.917 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:22.917 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.917 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.917 [2024-09-30 12:36:34.607145] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:22.917 [2024-09-30 12:36:34.607180] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:22.917 [2024-09-30 12:36:34.607263] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:22.917 [2024-09-30 12:36:34.607326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:22.918 [2024-09-30 
12:36:34.607336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.918 12:36:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.918 [2024-09-30 12:36:34.679016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:22.918 [2024-09-30 12:36:34.679067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.918 [2024-09-30 12:36:34.679086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:22.918 [2024-09-30 12:36:34.679095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.918 [2024-09-30 12:36:34.680907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.918 [2024-09-30 12:36:34.680944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:22.918 [2024-09-30 12:36:34.680992] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:22.918 [2024-09-30 12:36:34.681046] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:22.918 [2024-09-30 12:36:34.681140] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:22.918 spare 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.918 [2024-09-30 12:36:34.781038] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:22.918 [2024-09-30 12:36:34.781066] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:22.918 [2024-09-30 12:36:34.781148] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:22.918 [2024-09-30 12:36:34.781221] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:22.918 [2024-09-30 12:36:34.781229] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:22.918 [2024-09-30 12:36:34.781298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.918 12:36:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.918 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.178 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.178 "name": "raid_bdev1", 00:19:23.178 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:23.178 "strip_size_kb": 0, 00:19:23.178 "state": "online", 00:19:23.178 "raid_level": "raid1", 00:19:23.178 "superblock": true, 00:19:23.178 "num_base_bdevs": 2, 00:19:23.178 "num_base_bdevs_discovered": 2, 00:19:23.178 "num_base_bdevs_operational": 2, 00:19:23.178 "base_bdevs_list": [ 00:19:23.178 { 00:19:23.178 "name": "spare", 00:19:23.178 "uuid": "3576e3ad-6390-5bd6-aa4b-76151689c25b", 00:19:23.178 "is_configured": true, 00:19:23.178 "data_offset": 256, 00:19:23.178 "data_size": 7936 00:19:23.179 }, 00:19:23.179 { 00:19:23.179 "name": "BaseBdev2", 00:19:23.179 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:23.179 "is_configured": true, 00:19:23.179 "data_offset": 256, 00:19:23.179 "data_size": 7936 00:19:23.179 } 00:19:23.179 ] 00:19:23.179 }' 00:19:23.179 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.179 12:36:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.439 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.439 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.439 12:36:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:23.439 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:23.439 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.439 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.439 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.439 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.439 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.439 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.439 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.439 "name": "raid_bdev1", 00:19:23.439 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:23.439 "strip_size_kb": 0, 00:19:23.439 "state": "online", 00:19:23.439 "raid_level": "raid1", 00:19:23.439 "superblock": true, 00:19:23.439 "num_base_bdevs": 2, 00:19:23.439 "num_base_bdevs_discovered": 2, 00:19:23.439 "num_base_bdevs_operational": 2, 00:19:23.439 "base_bdevs_list": [ 00:19:23.439 { 00:19:23.439 "name": "spare", 00:19:23.439 "uuid": "3576e3ad-6390-5bd6-aa4b-76151689c25b", 00:19:23.439 "is_configured": true, 00:19:23.439 "data_offset": 256, 00:19:23.439 "data_size": 7936 00:19:23.439 }, 00:19:23.439 { 00:19:23.439 "name": "BaseBdev2", 00:19:23.439 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:23.439 "is_configured": true, 00:19:23.439 "data_offset": 256, 00:19:23.439 "data_size": 7936 00:19:23.439 } 00:19:23.439 ] 00:19:23.439 }' 00:19:23.439 12:36:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.439 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:23.439 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.439 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:23.439 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.439 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:23.439 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.439 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.700 [2024-09-30 12:36:35.373863] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:23.700 12:36:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.700 "name": "raid_bdev1", 00:19:23.700 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:23.700 "strip_size_kb": 0, 00:19:23.700 "state": "online", 00:19:23.700 
"raid_level": "raid1", 00:19:23.700 "superblock": true, 00:19:23.700 "num_base_bdevs": 2, 00:19:23.700 "num_base_bdevs_discovered": 1, 00:19:23.700 "num_base_bdevs_operational": 1, 00:19:23.700 "base_bdevs_list": [ 00:19:23.700 { 00:19:23.700 "name": null, 00:19:23.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.700 "is_configured": false, 00:19:23.700 "data_offset": 0, 00:19:23.700 "data_size": 7936 00:19:23.700 }, 00:19:23.700 { 00:19:23.700 "name": "BaseBdev2", 00:19:23.700 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:23.700 "is_configured": true, 00:19:23.700 "data_offset": 256, 00:19:23.700 "data_size": 7936 00:19:23.700 } 00:19:23.700 ] 00:19:23.700 }' 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.700 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.269 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:24.269 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.269 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.269 [2024-09-30 12:36:35.877136] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:24.269 [2024-09-30 12:36:35.877288] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:24.269 [2024-09-30 12:36:35.877306] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:24.269 [2024-09-30 12:36:35.877340] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:24.269 [2024-09-30 12:36:35.891901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:24.269 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.269 12:36:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:24.269 [2024-09-30 12:36:35.893708] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:25.209 12:36:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:25.209 12:36:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.209 12:36:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:25.209 12:36:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:25.209 12:36:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.209 12:36:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.209 12:36:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.209 12:36:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.209 12:36:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.209 12:36:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.209 12:36:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:25.209 "name": "raid_bdev1", 00:19:25.209 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:25.209 "strip_size_kb": 0, 00:19:25.209 "state": "online", 00:19:25.209 "raid_level": "raid1", 00:19:25.209 "superblock": true, 00:19:25.209 "num_base_bdevs": 2, 00:19:25.209 "num_base_bdevs_discovered": 2, 00:19:25.209 "num_base_bdevs_operational": 2, 00:19:25.209 "process": { 00:19:25.209 "type": "rebuild", 00:19:25.209 "target": "spare", 00:19:25.209 "progress": { 00:19:25.209 "blocks": 2560, 00:19:25.209 "percent": 32 00:19:25.209 } 00:19:25.209 }, 00:19:25.209 "base_bdevs_list": [ 00:19:25.209 { 00:19:25.209 "name": "spare", 00:19:25.209 "uuid": "3576e3ad-6390-5bd6-aa4b-76151689c25b", 00:19:25.209 "is_configured": true, 00:19:25.209 "data_offset": 256, 00:19:25.209 "data_size": 7936 00:19:25.209 }, 00:19:25.209 { 00:19:25.209 "name": "BaseBdev2", 00:19:25.209 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:25.209 "is_configured": true, 00:19:25.209 "data_offset": 256, 00:19:25.209 "data_size": 7936 00:19:25.209 } 00:19:25.209 ] 00:19:25.209 }' 00:19:25.209 12:36:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.209 12:36:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:25.209 12:36:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.209 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:25.209 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:25.209 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.209 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.209 [2024-09-30 12:36:37.049438] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:25.209 [2024-09-30 12:36:37.098349] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:25.209 [2024-09-30 12:36:37.098405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.209 [2024-09-30 12:36:37.098418] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:25.209 [2024-09-30 12:36:37.098426] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:25.470 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.470 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:25.470 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.470 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.470 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.470 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.470 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:25.470 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.470 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.470 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.470 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.470 12:36:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.470 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.470 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.470 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.470 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.470 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.470 "name": "raid_bdev1", 00:19:25.470 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:25.470 "strip_size_kb": 0, 00:19:25.470 "state": "online", 00:19:25.470 "raid_level": "raid1", 00:19:25.470 "superblock": true, 00:19:25.470 "num_base_bdevs": 2, 00:19:25.470 "num_base_bdevs_discovered": 1, 00:19:25.470 "num_base_bdevs_operational": 1, 00:19:25.470 "base_bdevs_list": [ 00:19:25.470 { 00:19:25.470 "name": null, 00:19:25.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.470 "is_configured": false, 00:19:25.470 "data_offset": 0, 00:19:25.470 "data_size": 7936 00:19:25.470 }, 00:19:25.470 { 00:19:25.470 "name": "BaseBdev2", 00:19:25.470 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:25.470 "is_configured": true, 00:19:25.470 "data_offset": 256, 00:19:25.470 "data_size": 7936 00:19:25.470 } 00:19:25.470 ] 00:19:25.470 }' 00:19:25.470 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.470 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.730 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:25.730 12:36:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.730 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.730 [2024-09-30 12:36:37.595726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:25.730 [2024-09-30 12:36:37.595789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.730 [2024-09-30 12:36:37.595806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:25.730 [2024-09-30 12:36:37.595816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.730 [2024-09-30 12:36:37.595994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.730 [2024-09-30 12:36:37.596017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:25.730 [2024-09-30 12:36:37.596062] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:25.730 [2024-09-30 12:36:37.596074] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:25.730 [2024-09-30 12:36:37.596087] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:25.730 [2024-09-30 12:36:37.596107] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:25.730 [2024-09-30 12:36:37.610292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:25.730 spare 00:19:25.730 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.730 [2024-09-30 12:36:37.612036] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:25.730 12:36:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:27.112 "name": "raid_bdev1", 00:19:27.112 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:27.112 "strip_size_kb": 0, 00:19:27.112 "state": "online", 00:19:27.112 "raid_level": "raid1", 00:19:27.112 "superblock": true, 00:19:27.112 "num_base_bdevs": 2, 00:19:27.112 "num_base_bdevs_discovered": 2, 00:19:27.112 "num_base_bdevs_operational": 2, 00:19:27.112 "process": { 00:19:27.112 "type": "rebuild", 00:19:27.112 "target": "spare", 00:19:27.112 "progress": { 00:19:27.112 "blocks": 2560, 00:19:27.112 "percent": 32 00:19:27.112 } 00:19:27.112 }, 00:19:27.112 "base_bdevs_list": [ 00:19:27.112 { 00:19:27.112 "name": "spare", 00:19:27.112 "uuid": "3576e3ad-6390-5bd6-aa4b-76151689c25b", 00:19:27.112 "is_configured": true, 00:19:27.112 "data_offset": 256, 00:19:27.112 "data_size": 7936 00:19:27.112 }, 00:19:27.112 { 00:19:27.112 "name": "BaseBdev2", 00:19:27.112 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:27.112 "is_configured": true, 00:19:27.112 "data_offset": 256, 00:19:27.112 "data_size": 7936 00:19:27.112 } 00:19:27.112 ] 00:19:27.112 }' 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.112 [2024-09-30 
12:36:38.775826] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:27.112 [2024-09-30 12:36:38.816721] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:27.112 [2024-09-30 12:36:38.816804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.112 [2024-09-30 12:36:38.816822] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:27.112 [2024-09-30 12:36:38.816829] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.112 12:36:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.112 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.112 "name": "raid_bdev1", 00:19:27.112 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:27.112 "strip_size_kb": 0, 00:19:27.112 "state": "online", 00:19:27.112 "raid_level": "raid1", 00:19:27.112 "superblock": true, 00:19:27.112 "num_base_bdevs": 2, 00:19:27.112 "num_base_bdevs_discovered": 1, 00:19:27.112 "num_base_bdevs_operational": 1, 00:19:27.112 "base_bdevs_list": [ 00:19:27.112 { 00:19:27.113 "name": null, 00:19:27.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.113 "is_configured": false, 00:19:27.113 "data_offset": 0, 00:19:27.113 "data_size": 7936 00:19:27.113 }, 00:19:27.113 { 00:19:27.113 "name": "BaseBdev2", 00:19:27.113 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:27.113 "is_configured": true, 00:19:27.113 "data_offset": 256, 00:19:27.113 "data_size": 7936 00:19:27.113 } 00:19:27.113 ] 00:19:27.113 }' 00:19:27.113 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.113 12:36:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:27.681 12:36:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.681 "name": "raid_bdev1", 00:19:27.681 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:27.681 "strip_size_kb": 0, 00:19:27.681 "state": "online", 00:19:27.681 "raid_level": "raid1", 00:19:27.681 "superblock": true, 00:19:27.681 "num_base_bdevs": 2, 00:19:27.681 "num_base_bdevs_discovered": 1, 00:19:27.681 "num_base_bdevs_operational": 1, 00:19:27.681 "base_bdevs_list": [ 00:19:27.681 { 00:19:27.681 "name": null, 00:19:27.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.681 "is_configured": false, 00:19:27.681 "data_offset": 0, 00:19:27.681 "data_size": 7936 00:19:27.681 }, 00:19:27.681 { 00:19:27.681 "name": "BaseBdev2", 00:19:27.681 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:27.681 "is_configured": true, 00:19:27.681 "data_offset": 256, 
00:19:27.681 "data_size": 7936 00:19:27.681 } 00:19:27.681 ] 00:19:27.681 }' 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.681 [2024-09-30 12:36:39.411434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:27.681 [2024-09-30 12:36:39.411489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.681 [2024-09-30 12:36:39.411510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:27.681 [2024-09-30 12:36:39.411519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.681 [2024-09-30 12:36:39.411683] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.681 [2024-09-30 12:36:39.411695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:27.681 [2024-09-30 12:36:39.411738] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:27.681 [2024-09-30 12:36:39.411761] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:27.681 [2024-09-30 12:36:39.411770] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:27.681 [2024-09-30 12:36:39.411779] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:27.681 BaseBdev1 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.681 12:36:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:28.621 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:28.621 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:28.621 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:28.621 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:28.621 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.621 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:28.621 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.621 12:36:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.621 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.621 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.621 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.621 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.621 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.621 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.621 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.621 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.621 "name": "raid_bdev1", 00:19:28.621 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:28.621 "strip_size_kb": 0, 00:19:28.621 "state": "online", 00:19:28.621 "raid_level": "raid1", 00:19:28.621 "superblock": true, 00:19:28.621 "num_base_bdevs": 2, 00:19:28.621 "num_base_bdevs_discovered": 1, 00:19:28.621 "num_base_bdevs_operational": 1, 00:19:28.621 "base_bdevs_list": [ 00:19:28.621 { 00:19:28.621 "name": null, 00:19:28.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.621 "is_configured": false, 00:19:28.621 "data_offset": 0, 00:19:28.621 "data_size": 7936 00:19:28.621 }, 00:19:28.621 { 00:19:28.621 "name": "BaseBdev2", 00:19:28.621 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:28.621 "is_configured": true, 00:19:28.621 "data_offset": 256, 00:19:28.621 "data_size": 7936 00:19:28.621 } 00:19:28.621 ] 00:19:28.621 }' 00:19:28.621 12:36:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.621 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.191 "name": "raid_bdev1", 00:19:29.191 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:29.191 "strip_size_kb": 0, 00:19:29.191 "state": "online", 00:19:29.191 "raid_level": "raid1", 00:19:29.191 "superblock": true, 00:19:29.191 "num_base_bdevs": 2, 00:19:29.191 "num_base_bdevs_discovered": 1, 00:19:29.191 "num_base_bdevs_operational": 1, 00:19:29.191 "base_bdevs_list": [ 00:19:29.191 { 00:19:29.191 "name": 
null, 00:19:29.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.191 "is_configured": false, 00:19:29.191 "data_offset": 0, 00:19:29.191 "data_size": 7936 00:19:29.191 }, 00:19:29.191 { 00:19:29.191 "name": "BaseBdev2", 00:19:29.191 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:29.191 "is_configured": true, 00:19:29.191 "data_offset": 256, 00:19:29.191 "data_size": 7936 00:19:29.191 } 00:19:29.191 ] 00:19:29.191 }' 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.191 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.191 [2024-09-30 12:36:40.988728] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:29.191 [2024-09-30 12:36:40.988873] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:29.191 [2024-09-30 12:36:40.988890] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:29.191 request: 00:19:29.191 { 00:19:29.191 "base_bdev": "BaseBdev1", 00:19:29.192 "raid_bdev": "raid_bdev1", 00:19:29.192 "method": "bdev_raid_add_base_bdev", 00:19:29.192 "req_id": 1 00:19:29.192 } 00:19:29.192 Got JSON-RPC error response 00:19:29.192 response: 00:19:29.192 { 00:19:29.192 "code": -22, 00:19:29.192 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:29.192 } 00:19:29.192 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:29.192 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:19:29.192 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:29.192 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:29.192 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:29.192 12:36:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:30.131 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:30.131 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:30.131 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.131 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:30.131 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:30.131 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:30.131 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.131 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.131 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.131 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.132 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.132 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.132 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.132 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.132 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.391 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.391 "name": "raid_bdev1", 00:19:30.391 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510", 00:19:30.391 "strip_size_kb": 0, 
00:19:30.391 "state": "online", 00:19:30.391 "raid_level": "raid1", 00:19:30.391 "superblock": true, 00:19:30.391 "num_base_bdevs": 2, 00:19:30.391 "num_base_bdevs_discovered": 1, 00:19:30.391 "num_base_bdevs_operational": 1, 00:19:30.391 "base_bdevs_list": [ 00:19:30.391 { 00:19:30.391 "name": null, 00:19:30.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.391 "is_configured": false, 00:19:30.391 "data_offset": 0, 00:19:30.391 "data_size": 7936 00:19:30.391 }, 00:19:30.391 { 00:19:30.391 "name": "BaseBdev2", 00:19:30.391 "uuid": "8e082635-e293-5e96-8685-949b4a778337", 00:19:30.391 "is_configured": true, 00:19:30.391 "data_offset": 256, 00:19:30.391 "data_size": 7936 00:19:30.391 } 00:19:30.391 ] 00:19:30.391 }' 00:19:30.391 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.392 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.651 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:30.651 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:30.651 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:30.651 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:30.651 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:30.651 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.651 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.651 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.651 12:36:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:30.651 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:30.651 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:19:30.651 "name": "raid_bdev1",
00:19:30.651 "uuid": "35a1d04c-5a68-41ae-aef1-5effcca98510",
00:19:30.651 "strip_size_kb": 0,
00:19:30.651 "state": "online",
00:19:30.651 "raid_level": "raid1",
00:19:30.651 "superblock": true,
00:19:30.651 "num_base_bdevs": 2,
00:19:30.651 "num_base_bdevs_discovered": 1,
00:19:30.651 "num_base_bdevs_operational": 1,
00:19:30.651 "base_bdevs_list": [
00:19:30.651 {
00:19:30.651 "name": null,
00:19:30.651 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:30.651 "is_configured": false,
00:19:30.651 "data_offset": 0,
00:19:30.651 "data_size": 7936
00:19:30.651 },
00:19:30.651 {
00:19:30.651 "name": "BaseBdev2",
00:19:30.651 "uuid": "8e082635-e293-5e96-8685-949b4a778337",
00:19:30.651 "is_configured": true,
00:19:30.651 "data_offset": 256,
00:19:30.651 "data_size": 7936
00:19:30.651 }
00:19:30.651 ]
00:19:30.651 }'
00:19:30.651 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:19:30.651 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:19:30.651 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:19:30.911 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:19:30.911 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88894
00:19:30.911 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88894 ']'
00:19:30.911 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88894
00:19:30.911 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname
00:19:30.911 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:30.911 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88894
00:19:30.912 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:19:30.912 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:19:30.912 killing process with pid 88894
12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88894'
00:19:30.912 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 88894
00:19:30.912 Received shutdown signal, test time was about 60.000000 seconds
00:19:30.912
00:19:30.912 Latency(us)
00:19:30.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:30.912 ===================================================================================================================
00:19:30.912 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:19:30.912 [2024-09-30 12:36:42.611157] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:19:30.912 [2024-09-30 12:36:42.611272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:30.912 12:36:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 88894
00:19:30.912 [2024-09-30 12:36:42.611317] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:30.912 [2024-09-30 12:36:42.611328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline
00:19:31.172 [2024-09-30 12:36:42.884730] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:19:32.554 12:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0
00:19:32.554
00:19:32.554 real 0m17.513s
00:19:32.554 user 0m22.935s
00:19:32.554 sys 0m1.698s
00:19:32.554 12:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:32.554 12:36:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:32.554 ************************************
00:19:32.554 END TEST raid_rebuild_test_sb_md_interleaved
00:19:32.554 ************************************
00:19:32.554 12:36:44 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT
00:19:32.554 12:36:44 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup
00:19:32.554 12:36:44 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88894 ']'
00:19:32.554 12:36:44 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88894
00:19:32.554 12:36:44 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest
00:19:32.554
00:19:32.554 real 11m58.998s
00:19:32.554 user 16m1.490s
00:19:32.554 sys 1m56.132s
00:19:32.554 12:36:44 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:32.554 12:36:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:19:32.554 ************************************
00:19:32.554 END TEST bdev_raid
00:19:32.554 ************************************
00:19:32.554 12:36:44 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh
00:19:32.554 12:36:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:19:32.554 12:36:44 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:32.554 12:36:44 -- common/autotest_common.sh@10 -- # set +x
00:19:32.554 ************************************
00:19:32.554 START TEST spdkcli_raid
************************************
00:19:32.554 12:36:44 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh
00:19:32.555 * Looking for test storage...
00:19:32.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:19:32.555 12:36:44 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:19:32.555 12:36:44 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version
00:19:32.555 12:36:44 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:19:32.555 12:36:44 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-:
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-:
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<'
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@345 -- # : 1
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@353 -- # local d=1
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@355 -- # echo 1
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@353 -- # local d=2
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@355 -- # echo 2
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:32.555 12:36:44 spdkcli_raid -- scripts/common.sh@368 -- # return 0
00:19:32.555 12:36:44 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:32.555 12:36:44 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:19:32.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:32.555 --rc genhtml_branch_coverage=1
00:19:32.555 --rc genhtml_function_coverage=1
00:19:32.555 --rc genhtml_legend=1
00:19:32.555 --rc geninfo_all_blocks=1
00:19:32.555 --rc geninfo_unexecuted_blocks=1
00:19:32.555
00:19:32.555 '
00:19:32.555 12:36:44 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:19:32.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:32.555 --rc genhtml_branch_coverage=1
00:19:32.555 --rc genhtml_function_coverage=1
00:19:32.555 --rc genhtml_legend=1
00:19:32.555 --rc geninfo_all_blocks=1
00:19:32.555 --rc geninfo_unexecuted_blocks=1
00:19:32.555
00:19:32.555 '
00:19:32.555 12:36:44 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:19:32.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:32.555 --rc genhtml_branch_coverage=1
00:19:32.555 --rc genhtml_function_coverage=1
00:19:32.555 --rc genhtml_legend=1
00:19:32.555 --rc geninfo_all_blocks=1
00:19:32.555 --rc geninfo_unexecuted_blocks=1
00:19:32.555
00:19:32.555 '
00:19:32.555 12:36:44 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:19:32.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:32.555 --rc genhtml_branch_coverage=1
00:19:32.555 --rc genhtml_function_coverage=1
00:19:32.555 --rc genhtml_legend=1
00:19:32.555 --rc geninfo_all_blocks=1
00:19:32.555 --rc geninfo_unexecuted_blocks=1
00:19:32.555
00:19:32.555 '
00:19:32.555 12:36:44 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:19:32.555 12:36:44 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:19:32.555 12:36:44 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:19:32.555 12:36:44 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh
00:19:32.555 12:36:44 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br
00:19:32.555 12:36:44 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int
00:19:32.555 12:36:44 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br
00:19:32.555 12:36:44 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns
00:19:32.555 12:36:44 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE")
00:19:32.555 12:36:44 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int
00:19:32.816 12:36:44 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2
00:19:32.816 12:36:44 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br
00:19:32.816 12:36:44 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2
00:19:32.816 12:36:44 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1
00:19:32.816 12:36:44 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3
00:19:32.816 12:36:44 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2
00:19:32.816 12:36:44 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260
00:19:32.816 12:36:44 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32
00:19:32.816 12:36:44 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2
00:19:32.816 12:36:44 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY
00:19:32.816 12:36:44 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1
00:19:32.816 12:36:44 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}")
00:19:32.816 12:36:44 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF
00:19:32.816 12:36:44 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test
00:19:32.816 12:36:44 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs
00:19:32.816 12:36:44 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh
00:19:32.816 12:36:44 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli
00:19:32.816 12:36:44 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli
00:19:32.816 12:36:44 spdkcli_raid -- spdkcli/raid.sh@15 -- # . /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:19:32.816 12:36:44 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:19:32.816 12:36:44 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:19:32.816 12:36:44 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT
00:19:32.816 12:36:44 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt
00:19:32.816 12:36:44 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:32.816 12:36:44 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:19:32.816 12:36:44 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt
00:19:32.816 12:36:44 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89575
00:19:32.816 12:36:44 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:19:32.816 12:36:44 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89575
00:19:32.816 12:36:44 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 89575 ']'
00:19:32.816 12:36:44 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:32.816 12:36:44 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:32.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
12:36:44 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:32.816 12:36:44 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:32.816 12:36:44 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:19:32.816 [2024-09-30 12:36:44.572831] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:19:32.816 [2024-09-30 12:36:44.573365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89575 ]
00:19:33.076 [2024-09-30 12:36:44.736502] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:33.076 [2024-09-30 12:36:44.935928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:19:33.076 [2024-09-30 12:36:44.935965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:19:34.014 12:36:45 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:34.014 12:36:45 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0
00:19:34.014 12:36:45 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt
00:19:34.014 12:36:45 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable
00:19:34.014 12:36:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:19:34.014 12:36:45 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc
00:19:34.014 12:36:45 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:34.014 12:36:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:19:34.014 12:36:45 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True
00:19:34.014 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True
00:19:34.014 '
00:19:35.924 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True]
00:19:35.924 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True]
00:19:35.924 12:36:47 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc
00:19:35.924 12:36:47 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable
00:19:35.924 12:36:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:19:35.924 12:36:47 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid
00:19:35.924 12:36:47 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:35.924 12:36:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:19:35.924 12:36:47 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True
00:19:35.924 '
00:19:36.864 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True]
00:19:36.864 12:36:48 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid
00:19:36.864 12:36:48 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable
00:19:36.864 12:36:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:19:37.123 12:36:48 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match
00:19:37.123 12:36:48 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:37.123 12:36:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:19:37.123 12:36:48 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match
00:19:37.123 12:36:48 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs
00:19:37.382 12:36:49 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match
00:19:37.640 12:36:49 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test
00:19:37.640 12:36:49 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match
00:19:37.640 12:36:49 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable
00:19:37.640 12:36:49 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:19:37.640 12:36:49 spdkcli_raid -- spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid
00:19:37.640 12:36:49 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:37.640 12:36:49 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:19:37.640 12:36:49 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True
00:19:37.640 '
00:19:38.580 Executing command: ['/bdevs/raid_volume delete testraid', '', True]
00:19:38.580 12:36:50 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid
00:19:38.580 12:36:50 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable
00:19:38.580 12:36:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:19:38.839 12:36:50 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc
00:19:38.839 12:36:50 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:38.839 12:36:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:19:38.839 12:36:50 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True
00:19:38.839 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True
00:19:38.839 '
00:19:40.218 Executing command: ['/bdevs/malloc delete Malloc1', '', True]
00:19:40.218 Executing command: ['/bdevs/malloc delete Malloc2', '', True]
00:19:40.218 12:36:51 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc
00:19:40.218 12:36:51 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable
00:19:40.218 12:36:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:19:40.218 12:36:51 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89575
00:19:40.218 12:36:51 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 89575 ']'
00:19:40.218 12:36:51 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 89575
00:19:40.218 12:36:51 spdkcli_raid -- common/autotest_common.sh@955 -- # uname
00:19:40.218 12:36:51 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:40.218 12:36:51 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89575
00:19:40.218 killing process with pid 89575
12:36:52 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:19:40.218 12:36:52 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:19:40.218 12:36:52 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89575'
00:19:40.218 12:36:52 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 89575
00:19:40.218 12:36:52 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 89575
00:19:42.764 12:36:54 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup
00:19:42.764 12:36:54 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89575 ']'
00:19:42.764 12:36:54 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89575
00:19:42.764 12:36:54 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 89575 ']'
00:19:42.764 12:36:54 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 89575
00:19:42.764 Process with pid 89575 is not found
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89575) - No such process
00:19:42.764 12:36:54 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 89575 is not found'
00:19:42.764 12:36:54 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']'
00:19:42.764 12:36:54 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:19:42.764 12:36:54 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:19:42.764 12:36:54 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:19:42.764
00:19:42.764 real 0m10.131s
00:19:42.764 user 0m20.628s
00:19:42.764 sys 0m1.124s
00:19:42.764 12:36:54 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:42.764 12:36:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:19:42.764 ************************************
00:19:42.764 END TEST spdkcli_raid
00:19:42.764 ************************************
00:19:42.764 12:36:54 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f
00:19:42.764 12:36:54 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:19:42.764 12:36:54 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:42.764 12:36:54 -- common/autotest_common.sh@10 -- # set +x
00:19:42.764 ************************************
00:19:42.764 START TEST blockdev_raid5f
00:19:42.764 ************************************
00:19:42.764 12:36:54 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f
00:19:42.765 * Looking for test storage...
00:19:42.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:19:42.765 12:36:54 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:19:42.765 12:36:54 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version
00:19:42.765 12:36:54 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:19:42.765 12:36:54 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-:
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-:
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra ver2
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<'
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@345 -- # : 1
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:42.765 12:36:54 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2
00:19:43.043 12:36:54 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2
00:19:43.043 12:36:54 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:43.043 12:36:54 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:43.043 12:36:54 blockdev_raid5f -- scripts/common.sh@368 -- # return 0
00:19:43.043 12:36:54 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:43.043 12:36:54 blockdev_raid5f -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:19:43.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:43.044 --rc genhtml_branch_coverage=1
00:19:43.044 --rc genhtml_function_coverage=1
00:19:43.044 --rc genhtml_legend=1
00:19:43.044 --rc geninfo_all_blocks=1
00:19:43.044 --rc geninfo_unexecuted_blocks=1
00:19:43.044
00:19:43.044 '
00:19:43.044 12:36:54 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:19:43.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:43.044 --rc genhtml_branch_coverage=1
00:19:43.044 --rc genhtml_function_coverage=1
00:19:43.044 --rc genhtml_legend=1
00:19:43.044 --rc geninfo_all_blocks=1
00:19:43.044 --rc geninfo_unexecuted_blocks=1
00:19:43.044
00:19:43.044 '
00:19:43.044 12:36:54 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:19:43.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:43.044 --rc genhtml_branch_coverage=1
00:19:43.044 --rc genhtml_function_coverage=1
00:19:43.044 --rc genhtml_legend=1
00:19:43.044 --rc geninfo_all_blocks=1
00:19:43.044 --rc geninfo_unexecuted_blocks=1
00:19:43.044
00:19:43.044 '
00:19:43.044 12:36:54 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:19:43.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:43.044 --rc genhtml_branch_coverage=1
00:19:43.044 --rc genhtml_function_coverage=1
00:19:43.044 --rc genhtml_legend=1
00:19:43.044 --rc geninfo_all_blocks=1
00:19:43.044 --rc geninfo_unexecuted_blocks=1
00:19:43.044
00:19:43.044 '
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@20 -- # :
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device=
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek=
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx=
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]]
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]]
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89852
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:19:43.044 12:36:54 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89852
00:19:43.044 12:36:54 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 89852 ']'
00:19:43.044 12:36:54 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:43.044 12:36:54 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:43.044 12:36:54 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:43.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
12:36:54 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:43.044 12:36:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:43.044 [2024-09-30 12:36:54.790141] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:19:43.044 [2024-09-30 12:36:54.790335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89852 ]
00:19:43.331 [2024-09-30 12:36:54.956489] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:43.331 [2024-09-30 12:36:55.152367] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:19:44.273 12:36:55 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:44.273 12:36:55 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0
00:19:44.273 12:36:55 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in
00:19:44.273 12:36:55 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf
00:19:44.273 12:36:55 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd
00:19:44.273 12:36:55 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:44.273 12:36:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:44.273 Malloc0
00:19:44.273 Malloc1
00:19:44.273 Malloc2
00:19:44.273 12:36:56 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:44.273 12:36:56 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine
00:19:44.273 12:36:56 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:44.273 12:36:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:44.273 12:36:56 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:44.273 12:36:56 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat
00:19:44.273 12:36:56 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel
00:19:44.273 12:36:56 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:44.273 12:36:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:44.273 12:36:56 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:44.273 12:36:56 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev
00:19:44.273 12:36:56 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:44.273 12:36:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:44.533 12:36:56 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:44.533 12:36:56 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf
00:19:44.533 12:36:56 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:44.533 12:36:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:44.533 12:36:56 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:44.533 12:36:56 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs
00:19:44.533 12:36:56 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs
00:19:44.533 12:36:56 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)'
00:19:44.533 12:36:56 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:44.533 12:36:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:44.533 12:36:56 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:44.533 12:36:56 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name
00:19:44.533 12:36:56 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name
00:19:44.533 12:36:56 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "985324c8-1c39-4bba-9ec3-9caea945b4b9"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "985324c8-1c39-4bba-9ec3-9caea945b4b9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "985324c8-1c39-4bba-9ec3-9caea945b4b9",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "db321bd0-c714-471d-90e8-169ad9eee082",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "9b8648b9-600e-475f-b90a-e9f17447b09c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "75ac10c8-af4e-47fd-b0de-c641f8fca4eb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}'
00:19:44.533 12:36:56 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}")
00:19:44.533 12:36:56 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f
00:19:44.533 12:36:56 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT
00:19:44.533 12:36:56 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 89852
00:19:44.533 12:36:56 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 89852 ']'
00:19:44.533 12:36:56 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 89852
00:19:44.533 12:36:56 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname
00:19:44.533 12:36:56 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:44.533
12:36:56 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89852 00:19:44.533 killing process with pid 89852 00:19:44.533 12:36:56 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:44.533 12:36:56 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:44.533 12:36:56 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89852' 00:19:44.533 12:36:56 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 89852 00:19:44.533 12:36:56 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 89852 00:19:47.074 12:36:58 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:47.074 12:36:58 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:47.074 12:36:58 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:47.074 12:36:58 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:47.074 12:36:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:47.336 ************************************ 00:19:47.336 START TEST bdev_hello_world 00:19:47.336 ************************************ 00:19:47.336 12:36:58 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:47.336 [2024-09-30 12:36:59.060317] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:19:47.336 [2024-09-30 12:36:59.060438] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89919 ] 00:19:47.336 [2024-09-30 12:36:59.224080] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.595 [2024-09-30 12:36:59.421648] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.165 [2024-09-30 12:36:59.917268] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:48.165 [2024-09-30 12:36:59.917417] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:48.165 [2024-09-30 12:36:59.917441] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:48.165 [2024-09-30 12:36:59.917917] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:48.165 [2024-09-30 12:36:59.918076] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:48.165 [2024-09-30 12:36:59.918094] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:48.165 [2024-09-30 12:36:59.918142] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:19:48.165 00:19:48.165 [2024-09-30 12:36:59.918161] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:49.545 00:19:49.545 real 0m2.375s 00:19:49.545 user 0m1.983s 00:19:49.545 sys 0m0.272s 00:19:49.545 ************************************ 00:19:49.545 END TEST bdev_hello_world 00:19:49.545 ************************************ 00:19:49.545 12:37:01 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:49.545 12:37:01 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:49.545 12:37:01 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:49.545 12:37:01 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:49.545 12:37:01 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:49.545 12:37:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:49.545 ************************************ 00:19:49.545 START TEST bdev_bounds 00:19:49.545 ************************************ 00:19:49.545 12:37:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:19:49.545 12:37:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89962 00:19:49.545 12:37:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:49.545 12:37:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:49.545 Process bdevio pid: 89962 00:19:49.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:49.545 12:37:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89962' 00:19:49.545 12:37:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89962 00:19:49.545 12:37:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 89962 ']' 00:19:49.545 12:37:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.545 12:37:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:49.545 12:37:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.545 12:37:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:49.545 12:37:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:49.805 [2024-09-30 12:37:01.521297] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:19:49.805 [2024-09-30 12:37:01.521501] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89962 ] 00:19:49.805 [2024-09-30 12:37:01.693825] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:50.065 [2024-09-30 12:37:01.893879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.065 [2024-09-30 12:37:01.894154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.065 [2024-09-30 12:37:01.894154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.634 12:37:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:50.634 12:37:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:19:50.634 12:37:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:50.634 I/O targets: 00:19:50.634 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:50.634 00:19:50.634 00:19:50.634 CUnit - A unit testing framework for C - Version 2.1-3 00:19:50.634 http://cunit.sourceforge.net/ 00:19:50.634 00:19:50.634 00:19:50.634 Suite: bdevio tests on: raid5f 00:19:50.634 Test: blockdev write read block ...passed 00:19:50.634 Test: blockdev write zeroes read block ...passed 00:19:50.894 Test: blockdev write zeroes read no split ...passed 00:19:50.894 Test: blockdev write zeroes read split ...passed 00:19:50.894 Test: blockdev write zeroes read split partial ...passed 00:19:50.894 Test: blockdev reset ...passed 00:19:50.894 Test: blockdev write read 8 blocks ...passed 00:19:50.894 Test: blockdev write read size > 128k ...passed 00:19:50.894 Test: blockdev write read invalid size ...passed 00:19:50.894 Test: blockdev write read offset + nbytes == size of blockdev ...passed 
00:19:50.894 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:50.894 Test: blockdev write read max offset ...passed 00:19:50.894 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:50.894 Test: blockdev writev readv 8 blocks ...passed 00:19:50.894 Test: blockdev writev readv 30 x 1block ...passed 00:19:50.894 Test: blockdev writev readv block ...passed 00:19:50.894 Test: blockdev writev readv size > 128k ...passed 00:19:50.894 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:50.894 Test: blockdev comparev and writev ...passed 00:19:50.894 Test: blockdev nvme passthru rw ...passed 00:19:50.894 Test: blockdev nvme passthru vendor specific ...passed 00:19:50.894 Test: blockdev nvme admin passthru ...passed 00:19:50.894 Test: blockdev copy ...passed 00:19:50.894 00:19:50.894 Run Summary: Type Total Ran Passed Failed Inactive 00:19:50.894 suites 1 1 n/a 0 0 00:19:50.894 tests 23 23 23 0 0 00:19:50.894 asserts 130 130 130 0 n/a 00:19:50.894 00:19:50.894 Elapsed time = 0.621 seconds 00:19:50.894 0 00:19:51.153 12:37:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 89962 00:19:51.153 12:37:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 89962 ']' 00:19:51.153 12:37:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 89962 00:19:51.153 12:37:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:19:51.153 12:37:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:51.153 12:37:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89962 00:19:51.153 12:37:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:51.153 12:37:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:51.153 killing process with pid 89962 00:19:51.153 12:37:02 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89962' 00:19:51.153 12:37:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 89962 00:19:51.153 12:37:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 89962 00:19:52.534 ************************************ 00:19:52.534 END TEST bdev_bounds 00:19:52.534 ************************************ 00:19:52.534 12:37:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:52.534 00:19:52.534 real 0m2.864s 00:19:52.534 user 0m6.716s 00:19:52.534 sys 0m0.408s 00:19:52.534 12:37:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:52.534 12:37:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:52.534 12:37:04 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:52.534 12:37:04 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:52.534 12:37:04 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:52.534 12:37:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:52.534 ************************************ 00:19:52.534 START TEST bdev_nbd 00:19:52.534 ************************************ 00:19:52.534 12:37:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:52.534 12:37:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:52.534 12:37:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:52.534 12:37:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:52.534 12:37:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 
00:19:52.534 12:37:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:52.534 12:37:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:52.534 12:37:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:52.534 12:37:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:52.534 12:37:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:52.534 12:37:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:52.534 12:37:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:52.535 12:37:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:52.535 12:37:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:52.535 12:37:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:52.535 12:37:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:52.535 12:37:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90027 00:19:52.535 12:37:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:52.535 12:37:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:52.535 12:37:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90027 /var/tmp/spdk-nbd.sock 00:19:52.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:19:52.535 12:37:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 90027 ']' 00:19:52.535 12:37:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:52.535 12:37:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:52.535 12:37:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:52.535 12:37:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:52.535 12:37:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:52.795 [2024-09-30 12:37:04.459720] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:52.795 [2024-09-30 12:37:04.459914] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.795 [2024-09-30 12:37:04.625504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.055 [2024-09-30 12:37:04.818727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.625 12:37:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:53.625 12:37:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:19:53.625 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:53.625 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:53.625 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:53.625 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:53.625 12:37:05 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:53.625 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:53.625 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:53.625 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:53.625 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:53.625 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:53.625 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:53.625 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:53.625 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:53.885 1+0 records in 00:19:53.885 1+0 records out 00:19:53.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441214 s, 9.3 MB/s 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:53.885 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:54.144 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:54.144 { 00:19:54.144 "nbd_device": "/dev/nbd0", 00:19:54.144 "bdev_name": "raid5f" 00:19:54.144 } 00:19:54.144 ]' 00:19:54.144 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:54.144 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:54.144 { 00:19:54.144 "nbd_device": "/dev/nbd0", 00:19:54.144 "bdev_name": "raid5f" 00:19:54.144 } 00:19:54.144 ]' 00:19:54.144 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:54.144 12:37:05 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:54.144 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:54.145 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:54.145 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:54.145 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:54.145 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:54.145 12:37:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:54.405 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:54.405 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:54.405 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:54.405 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:54.405 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:54.405 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:54.405 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:54.405 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:54.405 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:54.405 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:54.405 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:54.405 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:54.405 
12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:54.405 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:54.665 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:54.665 /dev/nbd0 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:54.926 1+0 records in 00:19:54.926 1+0 records out 00:19:54.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604201 s, 6.8 MB/s 
00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:54.926 { 00:19:54.926 "nbd_device": "/dev/nbd0", 00:19:54.926 "bdev_name": "raid5f" 00:19:54.926 } 00:19:54.926 ]' 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:54.926 { 00:19:54.926 "nbd_device": "/dev/nbd0", 00:19:54.926 "bdev_name": "raid5f" 00:19:54.926 } 00:19:54.926 ]' 00:19:54.926 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:55.186 12:37:06 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:55.186 256+0 records in 00:19:55.186 256+0 records out 00:19:55.186 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146152 s, 71.7 MB/s 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:55.186 256+0 records in 00:19:55.186 256+0 records out 00:19:55.186 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0326917 s, 32.1 MB/s 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 
00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:55.186 12:37:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:55.445 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:55.445 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:55.445 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:55.445 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:19:55.445 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:55.445 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:55.445 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:55.445 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:55.445 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:55.445 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:55.445 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:55.704 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:55.704 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:55.704 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:55.704 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:55.704 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:55.704 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:55.704 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:55.704 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:55.704 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:55.704 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:55.704 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:55.704 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:55.704 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:55.704 12:37:07 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:55.704 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:55.704 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:55.963 malloc_lvol_verify 00:19:55.963 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:55.963 e2acda73-12f0-48f3-b885-b38ab04daf2f 00:19:55.963 12:37:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:56.222 09d16bde-ac92-4a13-ac33-2ec7523ad667 00:19:56.222 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:56.481 /dev/nbd0 00:19:56.481 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:56.481 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:56.481 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:56.481 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:56.481 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:56.481 mke2fs 1.47.0 (5-Feb-2023) 00:19:56.481 Discarding device blocks: 0/4096 done 00:19:56.481 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:56.481 00:19:56.481 Allocating group tables: 0/1 done 00:19:56.481 Writing inode tables: 0/1 done 00:19:56.481 Creating journal (1024 blocks): done 00:19:56.481 Writing superblocks and filesystem accounting information: 0/1 
done 00:19:56.481 00:19:56.481 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:56.481 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:56.481 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:56.481 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:56.481 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:56.481 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:56.481 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:56.739 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:56.739 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:56.739 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:56.739 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:56.739 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:56.739 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:56.739 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:56.739 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:56.739 12:37:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90027 00:19:56.739 12:37:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 90027 ']' 00:19:56.739 12:37:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 90027 00:19:56.739 12:37:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:19:56.739 12:37:08 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:56.739 12:37:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90027 00:19:56.739 12:37:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:56.739 12:37:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:56.739 12:37:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90027' 00:19:56.739 killing process with pid 90027 00:19:56.739 12:37:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 90027 00:19:56.739 12:37:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@974 -- # wait 90027 00:19:58.646 12:37:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:58.646 00:19:58.646 real 0m5.668s 00:19:58.646 user 0m7.522s 00:19:58.646 sys 0m1.333s 00:19:58.646 12:37:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:58.646 12:37:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:58.646 ************************************ 00:19:58.646 END TEST bdev_nbd 00:19:58.646 ************************************ 00:19:58.646 12:37:10 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:58.646 12:37:10 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:58.646 12:37:10 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:58.646 12:37:10 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:58.646 12:37:10 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:58.646 12:37:10 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:58.646 12:37:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:58.646 ************************************ 00:19:58.646 START TEST bdev_fio 00:19:58.646 
************************************ 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:58.646 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:58.646 
12:37:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:58.646 ************************************ 00:19:58.646 START TEST bdev_fio_rw_verify 00:19:58.646 ************************************ 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:58.646 12:37:10 
blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:58.646 12:37:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:58.646 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:58.646 fio-3.35 00:19:58.646 Starting 1 thread 00:20:10.866 00:20:10.866 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90229: Mon Sep 30 12:37:21 2024 00:20:10.866 read: IOPS=12.6k, BW=49.4MiB/s (51.8MB/s)(494MiB/10001msec) 00:20:10.866 slat (usec): min=16, max=295, avg=18.55, stdev= 3.01 00:20:10.866 clat (usec): min=10, max=971, avg=126.92, stdev=46.48 00:20:10.866 lat (usec): min=29, max=990, avg=145.47, stdev=47.28 00:20:10.866 clat percentiles (usec): 00:20:10.866 | 50.000th=[ 131], 99.000th=[ 210], 99.900th=[ 375], 
99.990th=[ 799], 00:20:10.866 | 99.999th=[ 955] 00:20:10.866 write: IOPS=13.3k, BW=51.9MiB/s (54.5MB/s)(513MiB/9878msec); 0 zone resets 00:20:10.866 slat (usec): min=7, max=2586, avg=15.85, stdev= 8.00 00:20:10.866 clat (usec): min=58, max=2954, avg=290.75, stdev=43.51 00:20:10.866 lat (usec): min=75, max=2969, avg=306.60, stdev=44.97 00:20:10.866 clat percentiles (usec): 00:20:10.866 | 50.000th=[ 297], 99.000th=[ 379], 99.900th=[ 586], 99.990th=[ 1074], 00:20:10.866 | 99.999th=[ 2933] 00:20:10.866 bw ( KiB/s): min=50634, max=54792, per=98.71%, avg=52508.32, stdev=1465.54, samples=19 00:20:10.866 iops : min=12658, max=13698, avg=13127.05, stdev=366.42, samples=19 00:20:10.866 lat (usec) : 20=0.01%, 50=0.01%, 100=17.02%, 250=39.35%, 500=43.52% 00:20:10.866 lat (usec) : 750=0.08%, 1000=0.02% 00:20:10.866 lat (msec) : 2=0.01%, 4=0.01% 00:20:10.866 cpu : usr=98.79%, sys=0.45%, ctx=18, majf=0, minf=10355 00:20:10.866 IO depths : 1=7.6%, 2=19.9%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:10.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.866 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.866 issued rwts: total=126475,131364,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:10.866 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:10.866 00:20:10.866 Run status group 0 (all jobs): 00:20:10.866 READ: bw=49.4MiB/s (51.8MB/s), 49.4MiB/s-49.4MiB/s (51.8MB/s-51.8MB/s), io=494MiB (518MB), run=10001-10001msec 00:20:10.866 WRITE: bw=51.9MiB/s (54.5MB/s), 51.9MiB/s-51.9MiB/s (54.5MB/s-54.5MB/s), io=513MiB (538MB), run=9878-9878msec 00:20:11.126 ----------------------------------------------------- 00:20:11.126 Suppressions used: 00:20:11.126 count bytes template 00:20:11.126 1 7 /usr/src/fio/parse.c 00:20:11.126 950 91200 /usr/src/fio/iolog.c 00:20:11.126 1 8 libtcmalloc_minimal.so 00:20:11.126 1 904 libcrypto.so 00:20:11.126 ----------------------------------------------------- 00:20:11.126 
00:20:11.126 00:20:11.126 real 0m12.674s 00:20:11.126 user 0m12.984s 00:20:11.126 sys 0m0.637s 00:20:11.126 12:37:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:11.126 12:37:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:11.126 ************************************ 00:20:11.126 END TEST bdev_fio_rw_verify 00:20:11.126 ************************************ 00:20:11.126 12:37:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:11.126 12:37:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:11.126 12:37:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:11.126 12:37:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:11.126 12:37:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:20:11.126 12:37:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:20:11.126 12:37:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:20:11.126 12:37:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:20:11.126 12:37:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:11.126 12:37:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:20:11.126 12:37:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:20:11.126 12:37:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:11.387 12:37:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:20:11.387 
12:37:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:20:11.387 12:37:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:20:11.387 12:37:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:20:11.388 12:37:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "985324c8-1c39-4bba-9ec3-9caea945b4b9"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "985324c8-1c39-4bba-9ec3-9caea945b4b9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "985324c8-1c39-4bba-9ec3-9caea945b4b9",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "db321bd0-c714-471d-90e8-169ad9eee082",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "9b8648b9-600e-475f-b90a-e9f17447b09c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "75ac10c8-af4e-47fd-b0de-c641f8fca4eb",' ' "is_configured": true,' ' 
"data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:11.388 12:37:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:11.388 12:37:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:11.388 12:37:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:11.388 /home/vagrant/spdk_repo/spdk 00:20:11.388 12:37:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:11.388 12:37:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:11.388 12:37:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:20:11.388 00:20:11.388 real 0m12.980s 00:20:11.388 user 0m13.115s 00:20:11.388 sys 0m0.782s 00:20:11.388 12:37:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:11.388 12:37:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:11.388 ************************************ 00:20:11.388 END TEST bdev_fio 00:20:11.388 ************************************ 00:20:11.388 12:37:23 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:11.388 12:37:23 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:11.388 12:37:23 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:20:11.388 12:37:23 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:11.388 12:37:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:11.388 ************************************ 00:20:11.388 START TEST bdev_verify 00:20:11.388 ************************************ 00:20:11.388 12:37:23 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:11.388 [2024-09-30 12:37:23.249834] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:20:11.388 [2024-09-30 12:37:23.249948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90387 ] 00:20:11.648 [2024-09-30 12:37:23.398050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:11.908 [2024-09-30 12:37:23.593763] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.908 [2024-09-30 12:37:23.593852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.478 Running I/O for 5 seconds... 00:20:17.455 12418.00 IOPS, 48.51 MiB/s 11746.00 IOPS, 45.88 MiB/s 11487.00 IOPS, 44.87 MiB/s 11353.50 IOPS, 44.35 MiB/s 11254.00 IOPS, 43.96 MiB/s 00:20:17.455 Latency(us) 00:20:17.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.455 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:17.455 Verification LBA range: start 0x0 length 0x2000 00:20:17.455 raid5f : 5.02 6511.77 25.44 0.00 0.00 29563.92 170.82 38920.94 00:20:17.455 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:17.455 Verification LBA range: start 0x2000 length 0x2000 00:20:17.455 raid5f : 5.02 4751.11 18.56 0.00 0.00 40598.23 243.26 30907.81 00:20:17.455 =================================================================================================================== 00:20:17.455 Total : 11262.89 44.00 0.00 0.00 34220.80 170.82 38920.94 00:20:18.836 00:20:18.836 real 0m7.555s 00:20:18.836 user 0m13.776s 00:20:18.836 sys 0m0.273s 00:20:18.836 12:37:30 blockdev_raid5f.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:20:18.836 ************************************ 00:20:18.836 END TEST bdev_verify 00:20:18.836 ************************************ 00:20:18.836 12:37:30 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:19.097 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:19.097 12:37:30 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:20:19.097 12:37:30 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:19.097 12:37:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:19.097 ************************************ 00:20:19.097 START TEST bdev_verify_big_io 00:20:19.097 ************************************ 00:20:19.097 12:37:30 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:19.097 [2024-09-30 12:37:30.887282] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:20:19.097 [2024-09-30 12:37:30.887493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90491 ] 00:20:19.357 [2024-09-30 12:37:31.058713] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:19.644 [2024-09-30 12:37:31.311842] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.644 [2024-09-30 12:37:31.311878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.243 Running I/O for 5 seconds... 
00:20:25.395 633.00 IOPS, 39.56 MiB/s 760.00 IOPS, 47.50 MiB/s 761.33 IOPS, 47.58 MiB/s 793.25 IOPS, 49.58 MiB/s 799.40 IOPS, 49.96 MiB/s 00:20:25.395 Latency(us) 00:20:25.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.395 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:25.395 Verification LBA range: start 0x0 length 0x200 00:20:25.395 raid5f : 5.11 447.45 27.97 0.00 0.00 7165390.81 232.52 318693.84 00:20:25.395 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:25.395 Verification LBA range: start 0x200 length 0x200 00:20:25.395 raid5f : 5.23 363.74 22.73 0.00 0.00 8697000.02 191.39 379135.78 00:20:25.395 =================================================================================================================== 00:20:25.395 Total : 811.19 50.70 0.00 0.00 7861177.95 191.39 379135.78 00:20:27.304 00:20:27.304 real 0m7.999s 00:20:27.304 user 0m14.433s 00:20:27.304 sys 0m0.402s 00:20:27.304 12:37:38 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:27.304 12:37:38 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:27.304 ************************************ 00:20:27.304 END TEST bdev_verify_big_io 00:20:27.304 ************************************ 00:20:27.304 12:37:38 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:27.304 12:37:38 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:27.304 12:37:38 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:27.304 12:37:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:27.304 ************************************ 00:20:27.304 START TEST bdev_write_zeroes 00:20:27.304 ************************************ 
00:20:27.304 12:37:38 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:27.304 [2024-09-30 12:37:38.953857] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:20:27.304 [2024-09-30 12:37:38.953989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90595 ] 00:20:27.304 [2024-09-30 12:37:39.117013] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.565 [2024-09-30 12:37:39.363858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.134 Running I/O for 1 seconds... 00:20:29.510 29823.00 IOPS, 116.50 MiB/s 00:20:29.510 Latency(us) 00:20:29.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.510 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:29.510 raid5f : 1.01 29794.36 116.38 0.00 0.00 4283.23 1452.38 6067.09 00:20:29.510 =================================================================================================================== 00:20:29.510 Total : 29794.36 116.38 0.00 0.00 4283.23 1452.38 6067.09 00:20:30.890 00:20:30.890 real 0m3.710s 00:20:30.890 user 0m3.215s 00:20:30.890 sys 0m0.362s 00:20:30.890 12:37:42 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:30.890 12:37:42 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:30.890 ************************************ 00:20:30.890 END TEST bdev_write_zeroes 00:20:30.890 ************************************ 00:20:30.890 12:37:42 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:30.890 12:37:42 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:30.890 12:37:42 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:30.890 12:37:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:30.890 ************************************ 00:20:30.890 START TEST bdev_json_nonenclosed 00:20:30.890 ************************************ 00:20:30.890 12:37:42 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:30.890 [2024-09-30 12:37:42.735508] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:20:30.891 [2024-09-30 12:37:42.735617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90648 ] 00:20:31.150 [2024-09-30 12:37:42.905882] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.410 [2024-09-30 12:37:43.169442] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.410 [2024-09-30 12:37:43.169553] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:20:31.410 [2024-09-30 12:37:43.169580] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:31.410 [2024-09-30 12:37:43.169591] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:31.979 00:20:31.979 real 0m0.950s 00:20:31.979 user 0m0.665s 00:20:31.979 sys 0m0.179s 00:20:31.979 12:37:43 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:31.979 12:37:43 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:31.979 ************************************ 00:20:31.979 END TEST bdev_json_nonenclosed 00:20:31.979 ************************************ 00:20:31.979 12:37:43 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:31.979 12:37:43 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:31.979 12:37:43 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:31.979 12:37:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:31.979 ************************************ 00:20:31.979 START TEST bdev_json_nonarray 00:20:31.979 ************************************ 00:20:31.979 12:37:43 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:31.979 [2024-09-30 12:37:43.767329] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:20:31.979 [2024-09-30 12:37:43.767469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90679 ] 00:20:32.239 [2024-09-30 12:37:43.935997] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.498 [2024-09-30 12:37:44.187402] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.498 [2024-09-30 12:37:44.187515] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:20:32.498 [2024-09-30 12:37:44.187544] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:32.498 [2024-09-30 12:37:44.187555] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:32.757 00:20:32.757 real 0m0.939s 00:20:32.757 user 0m0.654s 00:20:32.757 sys 0m0.179s 00:20:32.757 12:37:44 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:32.757 12:37:44 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:32.757 ************************************ 00:20:32.757 END TEST bdev_json_nonarray 00:20:32.757 ************************************ 00:20:33.017 12:37:44 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:20:33.017 12:37:44 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:20:33.017 12:37:44 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:20:33.017 12:37:44 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:20:33.017 12:37:44 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:20:33.017 12:37:44 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:33.017 12:37:44 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:33.017 12:37:44 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:20:33.017 12:37:44 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:20:33.017 12:37:44 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:20:33.017 12:37:44 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:20:33.017 00:20:33.017 real 0m50.255s 00:20:33.017 user 1m6.619s 00:20:33.017 sys 0m5.365s 00:20:33.017 12:37:44 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:33.017 12:37:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:33.017 ************************************ 00:20:33.017 END TEST blockdev_raid5f 00:20:33.017 ************************************ 00:20:33.017 12:37:44 -- spdk/autotest.sh@194 -- # uname -s 00:20:33.017 12:37:44 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:20:33.017 12:37:44 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:33.017 12:37:44 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:33.017 12:37:44 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:20:33.017 12:37:44 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:20:33.017 12:37:44 -- spdk/autotest.sh@256 -- # timing_exit lib 00:20:33.017 12:37:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:33.017 12:37:44 -- common/autotest_common.sh@10 -- # set +x 00:20:33.017 12:37:44 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:20:33.017 12:37:44 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:20:33.017 12:37:44 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:20:33.017 12:37:44 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:20:33.017 12:37:44 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:33.017 12:37:44 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:33.017 12:37:44 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:20:33.017 12:37:44 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:20:33.017 12:37:44 -- spdk/autotest.sh@334 -- # '[' 
0 -eq 1 ']' 00:20:33.017 12:37:44 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:33.017 12:37:44 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:33.017 12:37:44 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:33.017 12:37:44 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:20:33.017 12:37:44 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:33.017 12:37:44 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:20:33.017 12:37:44 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:33.017 12:37:44 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:33.017 12:37:44 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:20:33.017 12:37:44 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:20:33.017 12:37:44 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:20:33.017 12:37:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:33.017 12:37:44 -- common/autotest_common.sh@10 -- # set +x 00:20:33.017 12:37:44 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:20:33.017 12:37:44 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:20:33.017 12:37:44 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:20:33.017 12:37:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.556 INFO: APP EXITING 00:20:35.556 INFO: killing all VMs 00:20:35.556 INFO: killing vhost app 00:20:35.556 INFO: EXIT DONE 00:20:35.815 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:35.815 Waiting for block devices as requested 00:20:35.815 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:36.075 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:37.016 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:37.016 Cleaning 00:20:37.016 Removing: /var/run/dpdk/spdk0/config 00:20:37.016 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:37.016 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:37.016 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:37.016 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:37.016 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:37.016 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:37.016 Removing: /dev/shm/spdk_tgt_trace.pid56808 00:20:37.016 Removing: /var/run/dpdk/spdk0 00:20:37.016 Removing: /var/run/dpdk/spdk_pid56567 00:20:37.016 Removing: /var/run/dpdk/spdk_pid56808 00:20:37.016 Removing: /var/run/dpdk/spdk_pid57048 00:20:37.016 Removing: /var/run/dpdk/spdk_pid57152 00:20:37.016 Removing: /var/run/dpdk/spdk_pid57208 00:20:37.016 Removing: /var/run/dpdk/spdk_pid57336 00:20:37.016 Removing: /var/run/dpdk/spdk_pid57354 00:20:37.016 Removing: /var/run/dpdk/spdk_pid57569 00:20:37.016 Removing: /var/run/dpdk/spdk_pid57681 00:20:37.016 Removing: /var/run/dpdk/spdk_pid57788 00:20:37.016 Removing: /var/run/dpdk/spdk_pid57910 00:20:37.016 Removing: /var/run/dpdk/spdk_pid58018 00:20:37.016 Removing: /var/run/dpdk/spdk_pid58058 00:20:37.016 Removing: /var/run/dpdk/spdk_pid58100 00:20:37.016 Removing: /var/run/dpdk/spdk_pid58176 00:20:37.016 Removing: /var/run/dpdk/spdk_pid58304 00:20:37.016 Removing: /var/run/dpdk/spdk_pid58746 00:20:37.016 Removing: /var/run/dpdk/spdk_pid58821 00:20:37.277 Removing: /var/run/dpdk/spdk_pid58895 00:20:37.277 Removing: /var/run/dpdk/spdk_pid58916 00:20:37.277 Removing: /var/run/dpdk/spdk_pid59068 00:20:37.277 Removing: /var/run/dpdk/spdk_pid59089 00:20:37.277 Removing: /var/run/dpdk/spdk_pid59234 00:20:37.277 Removing: /var/run/dpdk/spdk_pid59255 00:20:37.277 Removing: /var/run/dpdk/spdk_pid59327 00:20:37.277 Removing: /var/run/dpdk/spdk_pid59345 00:20:37.277 Removing: /var/run/dpdk/spdk_pid59409 00:20:37.277 Removing: /var/run/dpdk/spdk_pid59432 00:20:37.277 Removing: /var/run/dpdk/spdk_pid59633 00:20:37.277 Removing: /var/run/dpdk/spdk_pid59664 00:20:37.277 Removing: /var/run/dpdk/spdk_pid59753 00:20:37.277 Removing: /var/run/dpdk/spdk_pid61101 00:20:37.277 Removing: 
/var/run/dpdk/spdk_pid61312 00:20:37.277 Removing: /var/run/dpdk/spdk_pid61459 00:20:37.277 Removing: /var/run/dpdk/spdk_pid62105 00:20:37.277 Removing: /var/run/dpdk/spdk_pid62317 00:20:37.277 Removing: /var/run/dpdk/spdk_pid62458 00:20:37.277 Removing: /var/run/dpdk/spdk_pid63101 00:20:37.277 Removing: /var/run/dpdk/spdk_pid63430 00:20:37.277 Removing: /var/run/dpdk/spdk_pid63577 00:20:37.277 Removing: /var/run/dpdk/spdk_pid64961 00:20:37.277 Removing: /var/run/dpdk/spdk_pid65210 00:20:37.277 Removing: /var/run/dpdk/spdk_pid65355 00:20:37.277 Removing: /var/run/dpdk/spdk_pid66746 00:20:37.277 Removing: /var/run/dpdk/spdk_pid66994 00:20:37.277 Removing: /var/run/dpdk/spdk_pid67139 00:20:37.277 Removing: /var/run/dpdk/spdk_pid68519 00:20:37.277 Removing: /var/run/dpdk/spdk_pid68965 00:20:37.277 Removing: /var/run/dpdk/spdk_pid69110 00:20:37.277 Removing: /var/run/dpdk/spdk_pid70596 00:20:37.277 Removing: /var/run/dpdk/spdk_pid70861 00:20:37.277 Removing: /var/run/dpdk/spdk_pid71007 00:20:37.277 Removing: /var/run/dpdk/spdk_pid72500 00:20:37.277 Removing: /var/run/dpdk/spdk_pid72763 00:20:37.277 Removing: /var/run/dpdk/spdk_pid72914 00:20:37.277 Removing: /var/run/dpdk/spdk_pid74405 00:20:37.277 Removing: /var/run/dpdk/spdk_pid74892 00:20:37.277 Removing: /var/run/dpdk/spdk_pid75038 00:20:37.277 Removing: /var/run/dpdk/spdk_pid75187 00:20:37.277 Removing: /var/run/dpdk/spdk_pid75605 00:20:37.277 Removing: /var/run/dpdk/spdk_pid76336 00:20:37.277 Removing: /var/run/dpdk/spdk_pid76714 00:20:37.277 Removing: /var/run/dpdk/spdk_pid77397 00:20:37.277 Removing: /var/run/dpdk/spdk_pid77844 00:20:37.277 Removing: /var/run/dpdk/spdk_pid78592 00:20:37.277 Removing: /var/run/dpdk/spdk_pid79020 00:20:37.277 Removing: /var/run/dpdk/spdk_pid80984 00:20:37.277 Removing: /var/run/dpdk/spdk_pid81422 00:20:37.277 Removing: /var/run/dpdk/spdk_pid81857 00:20:37.277 Removing: /var/run/dpdk/spdk_pid83957 00:20:37.277 Removing: /var/run/dpdk/spdk_pid84443 00:20:37.277 Removing: 
/var/run/dpdk/spdk_pid84960 00:20:37.277 Removing: /var/run/dpdk/spdk_pid86030 00:20:37.277 Removing: /var/run/dpdk/spdk_pid86358 00:20:37.277 Removing: /var/run/dpdk/spdk_pid87304 00:20:37.537 Removing: /var/run/dpdk/spdk_pid87627 00:20:37.537 Removing: /var/run/dpdk/spdk_pid88565 00:20:37.537 Removing: /var/run/dpdk/spdk_pid88894 00:20:37.537 Removing: /var/run/dpdk/spdk_pid89575 00:20:37.537 Removing: /var/run/dpdk/spdk_pid89852 00:20:37.537 Removing: /var/run/dpdk/spdk_pid89919 00:20:37.537 Removing: /var/run/dpdk/spdk_pid89962 00:20:37.537 Removing: /var/run/dpdk/spdk_pid90214 00:20:37.537 Removing: /var/run/dpdk/spdk_pid90387 00:20:37.537 Removing: /var/run/dpdk/spdk_pid90491 00:20:37.537 Removing: /var/run/dpdk/spdk_pid90595 00:20:37.537 Removing: /var/run/dpdk/spdk_pid90648 00:20:37.537 Removing: /var/run/dpdk/spdk_pid90679 00:20:37.537 Clean 00:20:37.537 12:37:49 -- common/autotest_common.sh@1451 -- # return 0 00:20:37.537 12:37:49 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:20:37.537 12:37:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:37.537 12:37:49 -- common/autotest_common.sh@10 -- # set +x 00:20:37.537 12:37:49 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:20:37.537 12:37:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:37.537 12:37:49 -- common/autotest_common.sh@10 -- # set +x 00:20:37.537 12:37:49 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:37.796 12:37:49 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:37.796 12:37:49 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:37.796 12:37:49 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:20:37.796 12:37:49 -- spdk/autotest.sh@394 -- # hostname 00:20:37.796 12:37:49 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:37.796 geninfo: WARNING: invalid characters removed from testname! 00:21:04.362 12:38:14 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:05.303 12:38:17 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:07.207 12:38:18 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:09.117 12:38:20 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:11.659 12:38:22 -- spdk/autotest.sh@402 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:13.042 12:38:24 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:14.977 12:38:26 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:15.248 12:38:26 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:21:15.248 12:38:26 -- common/autotest_common.sh@1681 -- $ lcov --version 00:21:15.248 12:38:26 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:21:15.248 12:38:26 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:21:15.248 12:38:26 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:21:15.248 12:38:26 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:21:15.248 12:38:26 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:21:15.248 12:38:26 -- scripts/common.sh@336 -- $ IFS=.-: 00:21:15.248 12:38:26 -- scripts/common.sh@336 -- $ read -ra ver1 00:21:15.248 12:38:26 -- scripts/common.sh@337 -- $ IFS=.-: 00:21:15.248 12:38:26 -- scripts/common.sh@337 -- $ read -ra ver2 00:21:15.248 12:38:26 -- scripts/common.sh@338 -- $ local 'op=<' 00:21:15.248 12:38:26 -- scripts/common.sh@340 -- $ ver1_l=2 00:21:15.248 12:38:26 -- scripts/common.sh@341 -- $ ver2_l=1 00:21:15.248 12:38:26 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:21:15.248 12:38:26 -- scripts/common.sh@344 -- $ case "$op" in 00:21:15.248 12:38:26 -- scripts/common.sh@345 -- $ : 1 
00:21:15.248 12:38:26 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:21:15.248 12:38:26 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:15.248 12:38:26 -- scripts/common.sh@365 -- $ decimal 1 00:21:15.248 12:38:26 -- scripts/common.sh@353 -- $ local d=1 00:21:15.248 12:38:26 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:21:15.248 12:38:26 -- scripts/common.sh@355 -- $ echo 1 00:21:15.249 12:38:26 -- scripts/common.sh@365 -- $ ver1[v]=1 00:21:15.249 12:38:26 -- scripts/common.sh@366 -- $ decimal 2 00:21:15.249 12:38:26 -- scripts/common.sh@353 -- $ local d=2 00:21:15.249 12:38:26 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:21:15.249 12:38:26 -- scripts/common.sh@355 -- $ echo 2 00:21:15.249 12:38:26 -- scripts/common.sh@366 -- $ ver2[v]=2 00:21:15.249 12:38:26 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:21:15.249 12:38:26 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:21:15.249 12:38:26 -- scripts/common.sh@368 -- $ return 0 00:21:15.249 12:38:26 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:15.249 12:38:26 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:21:15.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.249 --rc genhtml_branch_coverage=1 00:21:15.249 --rc genhtml_function_coverage=1 00:21:15.249 --rc genhtml_legend=1 00:21:15.249 --rc geninfo_all_blocks=1 00:21:15.249 --rc geninfo_unexecuted_blocks=1 00:21:15.249 00:21:15.249 ' 00:21:15.249 12:38:26 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:21:15.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.249 --rc genhtml_branch_coverage=1 00:21:15.249 --rc genhtml_function_coverage=1 00:21:15.249 --rc genhtml_legend=1 00:21:15.249 --rc geninfo_all_blocks=1 00:21:15.249 --rc geninfo_unexecuted_blocks=1 00:21:15.249 00:21:15.249 ' 00:21:15.249 12:38:26 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 
00:21:15.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.249 --rc genhtml_branch_coverage=1 00:21:15.249 --rc genhtml_function_coverage=1 00:21:15.249 --rc genhtml_legend=1 00:21:15.249 --rc geninfo_all_blocks=1 00:21:15.249 --rc geninfo_unexecuted_blocks=1 00:21:15.249 00:21:15.249 ' 00:21:15.249 12:38:26 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:21:15.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.249 --rc genhtml_branch_coverage=1 00:21:15.249 --rc genhtml_function_coverage=1 00:21:15.249 --rc genhtml_legend=1 00:21:15.249 --rc geninfo_all_blocks=1 00:21:15.249 --rc geninfo_unexecuted_blocks=1 00:21:15.249 00:21:15.249 ' 00:21:15.249 12:38:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:15.249 12:38:26 -- scripts/common.sh@15 -- $ shopt -s extglob 00:21:15.249 12:38:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:21:15.249 12:38:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.249 12:38:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.249 12:38:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.249 12:38:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.249 12:38:26 -- paths/export.sh@4 
-- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.249 12:38:26 -- paths/export.sh@5 -- $ export PATH 00:21:15.249 12:38:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.249 12:38:26 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:21:15.249 12:38:26 -- common/autobuild_common.sh@479 -- $ date +%s 00:21:15.249 12:38:26 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727699906.XXXXXX 00:21:15.249 12:38:26 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727699906.NhPwpF 00:21:15.249 12:38:26 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:21:15.249 12:38:26 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:21:15.249 12:38:26 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:21:15.249 12:38:26 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:21:15.249 12:38:26 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:21:15.249 12:38:27 -- common/autobuild_common.sh@495 -- $ 
get_config_params 00:21:15.249 12:38:27 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:21:15.249 12:38:27 -- common/autotest_common.sh@10 -- $ set +x 00:21:15.249 12:38:27 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:21:15.249 12:38:27 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:21:15.249 12:38:27 -- pm/common@17 -- $ local monitor 00:21:15.249 12:38:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:15.249 12:38:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:15.249 12:38:27 -- pm/common@25 -- $ sleep 1 00:21:15.249 12:38:27 -- pm/common@21 -- $ date +%s 00:21:15.249 12:38:27 -- pm/common@21 -- $ date +%s 00:21:15.249 12:38:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727699907 00:21:15.249 12:38:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727699907 00:21:15.249 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727699907_collect-cpu-load.pm.log 00:21:15.249 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727699907_collect-vmstat.pm.log 00:21:16.207 12:38:28 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:21:16.207 12:38:28 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:21:16.207 12:38:28 -- spdk/autopackage.sh@14 -- $ timing_finish 00:21:16.207 12:38:28 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:16.207 12:38:28 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:21:16.207 
12:38:28 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:16.207 12:38:28 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:21:16.207 12:38:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:21:16.207 12:38:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:21:16.207 12:38:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:16.207 12:38:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:21:16.207 12:38:28 -- pm/common@44 -- $ pid=92195 00:21:16.207 12:38:28 -- pm/common@50 -- $ kill -TERM 92195 00:21:16.207 12:38:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:16.207 12:38:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:21:16.468 12:38:28 -- pm/common@44 -- $ pid=92197 00:21:16.468 12:38:28 -- pm/common@50 -- $ kill -TERM 92197 00:21:16.468 + [[ -n 5422 ]] 00:21:16.468 + sudo kill 5422 00:21:16.478 [Pipeline] } 00:21:16.494 [Pipeline] // timeout 00:21:16.501 [Pipeline] } 00:21:16.519 [Pipeline] // stage 00:21:16.526 [Pipeline] } 00:21:16.541 [Pipeline] // catchError 00:21:16.552 [Pipeline] stage 00:21:16.553 [Pipeline] { (Stop VM) 00:21:16.568 [Pipeline] sh 00:21:16.858 + vagrant halt 00:21:19.401 ==> default: Halting domain... 00:21:27.544 [Pipeline] sh 00:21:27.827 + vagrant destroy -f 00:21:30.372 ==> default: Removing domain... 
00:21:30.385 [Pipeline] sh 00:21:30.674 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:21:30.685 [Pipeline] } 00:21:30.701 [Pipeline] // stage 00:21:30.707 [Pipeline] } 00:21:30.721 [Pipeline] // dir 00:21:30.727 [Pipeline] } 00:21:30.741 [Pipeline] // wrap 00:21:30.749 [Pipeline] } 00:21:30.762 [Pipeline] // catchError 00:21:30.771 [Pipeline] stage 00:21:30.773 [Pipeline] { (Epilogue) 00:21:30.785 [Pipeline] sh 00:21:31.071 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:35.287 [Pipeline] catchError 00:21:35.289 [Pipeline] { 00:21:35.302 [Pipeline] sh 00:21:35.589 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:35.589 Artifacts sizes are good 00:21:35.600 [Pipeline] } 00:21:35.614 [Pipeline] // catchError 00:21:35.625 [Pipeline] archiveArtifacts 00:21:35.633 Archiving artifacts 00:21:35.751 [Pipeline] cleanWs 00:21:35.764 [WS-CLEANUP] Deleting project workspace... 00:21:35.764 [WS-CLEANUP] Deferred wipeout is used... 00:21:35.772 [WS-CLEANUP] done 00:21:35.774 [Pipeline] } 00:21:35.790 [Pipeline] // stage 00:21:35.795 [Pipeline] } 00:21:35.808 [Pipeline] // node 00:21:35.813 [Pipeline] End of Pipeline 00:21:35.861 Finished: SUCCESS